Wednesday, December 16, 2009

WordPress CLI Import

So a few weeks back I wrote some code to export a WordPress blog via the command line. Now, to come full circle, I've finished dissecting the code to import a WordPress blog via the CLI.

Now, before proceeding, I highly recommend going through the import/export process via the admin GUI if you have not done so already. Doing so will give you a full picture of the process and help you see why some parts of the code were written the way they were.

At this point I assume that an xml export has already been made; if not, go make one!

Here's the entire code:

if ($argc < 4) {
    print "Usage: $argv[0] from_user to_user xml_file\n";
    exit;
}


$user_login = $argv[2];
include 'wp-config.php';
include 'wp-admin/includes/import.php';
include 'wp-includes/registration.php';
include 'wp-admin/includes/post.php';
include 'wp-admin/includes/taxonomy.php';
include 'wp-admin/import/wordpress.php';

// the WP_Import class looks for POST values for author_in and user_select
// author_in is an array of users to import
$_POST['author_in'][1] = $argv[1];

// user_select is an array of the selected user's ID for posts to be mapped to
// if empty, the user in author_in will be created
$userdata = get_userdatabylogin( $user_login );
$_POST['user_select'][1] = $userdata->ID;

$wp_import = new WP_Import();
$wp_import->import_file($argv[3]);


The code is short and simple. The only part that was hard to understand was how the author_in and user_select POST arrays were used, so I'm going to take a minute to give a brief explanation.

$_POST['author_in'][1] = $argv[1];

Right now I only need a 1-to-1 user mapping: import one user's posts and map them to an existing user. Of course, with a little more work, this code can support a multi-user import.

The author_in array is keyed 0...n, and the values are the users to be imported. Keep track of which key you use, though, since the same key has to be used in the user_select array.


$userdata = get_userdatabylogin( $user_login );
$_POST['user_select'][1] = $userdata->ID;


Now, the user_select array is made up of the existing blog's user IDs, where the key matches whatever key was used in the author_in array. (Yes, this is what confused me!)

If you wanted to import multiple users' blogs, these two arrays make more sense.

Example
Users to be imported: jim, mary
To be mapped to these users: frank, sue (respectively)

Now the arrays would look something like this:

$_POST['author_in'][0] = 'jim';
$_POST['author_in'][1] = 'mary';

$userdata = get_userdatabylogin( 'frank' );
$_POST['user_select'][0] = $userdata->ID;
$userdata = get_userdatabylogin( 'sue' );
$_POST['user_select'][1] = $userdata->ID;


Hopefully you can see what's going on now if you look at the array keys!
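
And if you actually needed to script that multi-user case rather than hard-coding the arrays, here's a minimal sketch of one way to do it. This is purely hypothetical: it assumes the same includes as the script above, and that you pass from:to pairs on the command line, which is not how my script above works.

// hypothetical usage: php wp-cli-import.php export.xml jim:frank mary:sue
$xml_file = $argv[1];
foreach (array_slice($argv, 2) as $i => $pair) {
    list($from_user, $to_user) = explode(':', $pair);
    $_POST['author_in'][$i] = $from_user;          // user being imported
    $userdata = get_userdatabylogin($to_user);     // existing user to map posts to
    $_POST['user_select'][$i] = $userdata->ID;
}
$wp_import = new WP_Import();
$wp_import->import_file($xml_file);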

Enjoy! Cheers!

Monday, December 14, 2009

Help Save MySQL!

We are using MySQL, help save it!

The creator of MySQL, Michael "Monty" Widenius, blogged this past Saturday urging all MySQL users to show their support. As we all know already, Oracle has acquired Sun, and this puts MySQL's future in their hands. Monty outlines the dangers ahead if Oracle gets its way. The main way of showing support is to email the European Commission.

Monty was even kind enough to set up email templates, so now all you need to do is copy/paste! I've reposted them below:


Send this to: comp-merger-registry@ec.europa.eu
If you want to keep us updated, send a copy to ec@askmonty.org

If you have extra time to help, fill in the following, if not, just skip to the main text.

Name:
Title:
Company:
Size of company:
How many MySQL installations:
Total data stored in MySQL (megabyte):
For what type of applications is MySQL used:
Should this email be kept confidential by EC: Yes/No

Copy or use one of the below texts as a base for your answer:

a)
I don't trust that Oracle will take good care of MySQL and MySQL should be divested to another company or foundation that have everything to gain by developing and promoting MySQL. One should also in the future be able to combine MySQL with closed source application (either by exceptions, a more permissive license or be able to dual license MySQL under favourable terms)

b)

I think that Oracle could be a good steward of MySQL, but I would need EC to have legally binding guarantees from Oracle that:
- All of MySQL will continue to be fully Open Source/free software in the future (no closed source modules).
- Open Source version and dual-licensed version of MySQL should have same source (like today).
- That development will be done in community friendly way.
- The manual should be released under a permissive license (so that one can fork it, the same way one can fork the server)
- That MySQL should be released under a more permissive license to ensure that forks can truly compete with Oracle if Oracle is not a good steward after all.
Alternatively:
- One should be able to always buy low priced commercial licenses for MySQL.
- All of the above should be perpetual and irrevocable.

There should also be mechanism so that if Oracle is not doing what is expected of it, forks should be able to compete with Oracle

c)
I trust Oracle and I suggest that EC will approve the deal unconditionally.


--------------------

Wednesday, November 25, 2009

WordPress CLI Export

Why would you want to use the command line to export your WordPress blog? Well, if you are like me and manage 300+ blogs, you'll want an easy way to export a blog without logging into each and every WordPress admin page to click around and export that xml file.

Anyhow, if you poke around in the wp-admin/export.php page, you'll notice that it calls a function, export_wp (which is defined in wp-admin/includes/export.php). And voila! Amazingly, that's all you need!

It spits the XML out to stdout, so you'll want to grab that output buffer! See the code below for the general idea of how I did things.

Simple. Very useful! Cheers!


include 'wp-config.php';
include 'wp-admin/includes/export.php';

ob_start();
export_wp();
$file = ob_get_contents();
ob_end_clean();

$fh = fopen("wordpress-" . date('Y-m-d') . ".xml", 'w');
fwrite($fh, $file);
fclose($fh);
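
And since the whole point is doing this for a few hundred blogs, here's a rough sketch of a wrapper. This is just an assumption of how you might glue it together: it supposes each blog lives in its own directory listed in a blogs.txt file, and that the script above is saved as ajc-wpexport.php (a hypothetical name) in each blog's root.

// hypothetical wrapper: run the export from inside each blog's directory
// so it picks up that blog's own wp-config.php
$blogs = file('blogs.txt', FILE_IGNORE_NEW_LINES | FILE_SKIP_EMPTY_LINES);
foreach ($blogs as $dir) {
    echo "Exporting $dir...\n";
    system('cd ' . escapeshellarg($dir) . ' && php ajc-wpexport.php');
}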

Thursday, August 27, 2009

Quick Bash One-Liner

Well, I'm here trying to wrap things up for the day and just found a very useful technique for getting the directory, without the filename, out of a find listing.

So, I'm sure you were all thinking, "well why don't you just use `find` with `-type d`?" Well, for this specific task I will be using `find` to get a list of all files, and then I need each file's directory for another script. So booyah, in your face! (hehe jk)

So, if I have a path string with a filename at the end, I can drop the filename with bash's ${f1%/*} parameter expansion (the % strips the shortest match of the pattern from the end of the string), like so:

f1=/mnt/www/tmpdrin/dir1/dir2/fileindir.txt
f2=${f1%/*};
## $f2 will now be /mnt/www/tmpdrin/dir1/dir2


Sweet as!


And if you were curious how I was using it as a one-liner, here ya go:

for f in `find /mnt/www/tmpdrin/ -type f`; do f2=`echo "$f" | sed -e 's!/mnt/!!g'`; f3=${f2%/*}; echo "$f3"; done


Obviously, I'm doing a lot more here than just echoing, but this is just an example mmmkay? ;)

Cheers!

Post Script: It's been a while since my last post, but I've just been working on proprietary stuff lately and nothing would be too useful for y'all. Sorry!

Tuesday, June 23, 2009

Rant: Stupid Debugging Day!

This is not going to be a tech recipe or anything productive. In fact, it's more of a kick-myself-in-the-arse type of deal, and I guess also a reminder to myself if/when I have to make this change again.

Anyhow, here's the story in a nutshell: my project manager tasks me to roll out my code for our next version release to our test group. All goes well EXCEPT for one (yes, that's right 1) site. Odd. So, now I suppose it must be some rare issue with corrupt data in the database. I make a dump of the production db for that site and bring it down to beta to test. I run it through the upgrade; success! Interesting.

So now, I'm back again to try this upgrade in production. Yup, fails again. Crap! Some fresh eyes need to look things over, so I ask my manager to take a look. The issue puzzles him for a little bit, but then he IMs, "hold. I think I know why." :seconds pass: "working now?" And yes it is!

An error in the apache log sheds some light on the situation. Apache is complaining that an .htm file is missing from the root directory of my app. Apparently, this one site I needed to upgrade happened to be the first site in the apache conf, and it didn't have this one file that the Netscaler needed. Awesome. It had nothing to do with my code. I spent pretty much the entire day debugging code to see what was going on. Lame...

At least now I know for future reference.

Cheers!

Friday, June 5, 2009

Update: WordPress - Modifying Post Content

Earlier in May I wrote up this post about modifying WordPress post content and prefixing URIs to go through a tracker. Well, one of the web designers here found a ridiculously rare bug. Apparently, if your content has a URI that breaks at the end of a line because of word wrap, the trackURL function will not process it. Weird!

So, I tested and confirmed and basically rewrote the entire function to one line. Who knew?

Anyhow here it is in all of its glory:

function trackURL($content) {
$content = preg_replace('/\< a(\s|\n|[^href])href=\"([^"]+)\"\>([^<]+)\<\/a\>/','$3',$content);
return $content;
}


Blogger is giving me gripe about the regex, so here's a screenshot:
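
Since the screenshot isn't reproduced here, here's a rough sketch of what the un-mangled one-liner probably looks like, pieced together from the behavior described in the original post. Treat the exact regex (and the trackerUri.com URL) as an approximation, not the code from the screenshot:

function trackURL($content) {
    // prefix every href that isn't already tracked; [^"]+ matches across
    // line breaks, so word-wrapped markup no longer breaks the match
    return preg_replace(
        '/href="(?!http:\/\/trackerUri\.com)([^"]+)"/',
        'href="http://trackerUri.com/?ab=c/hij&ku=$1"',
        $content
    );
}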


Cheers!

Wednesday, May 27, 2009

Apache: "Redirecting" Missing Images with a Default Image

We all know how horrible sites with broken image links are. If you maintain a lot of sites, you can't give your attention to every image detail on every site. So what about displaying a default image when an image is missing? Well, that will work perfectly fine as long as the images are stored on your server and are not external image links; and it is really simple to do with Apache's mod_rewrite!

Try this code in your vhost entry:

RewriteEngine On
RewriteCond "%{DOCUMENT_ROOT}%{REQUEST_URI}" !-f
RewriteRule \.(gif|jpe?g|png|bmp) /path/to/your/404.gif [NC,L]


Cheers!

Note: This idea was found on the web. At the time of this writing, I cannot find the original source. But if I do, I will properly credit them!

Update: A forum link was found as a reference: http://www.webmasterworld.com/apache/3274493.htm

Wednesday, May 20, 2009

Getting Unique Entries From Two Files

Okay, so you have data in one file and updated data in another file. Let's say you have a list of directories that have been searched, and they all contain a file named "0.txt". Another file lists all of the directories whose "0.txt" file is actually empty. If you diff the files, that only gives you the changes; it won't necessarily give you the entries NOT in the updated file. But if you get all of the directory entries that are NOT in the new list, then you have a complete list of directories that contain a valid file!

*(Well, that was just my example, my real life scenario involved MySQL data.)

Anyhow, with this PHP code you can go through both files and use in_array to see whether each entry from one file exists in the other.

**Note: I wanted to use $argv to handle the arguments, but for some reason file didn't like it. Curious...

$f1 = "/tmp/original.txt";
$f2 = "/tmp/modified.txt";

$a = file($f1);
$b = file($f2);

foreach ($a as $k => $v) {
if (!in_array($v, $b)) {
print "$v";
}
}
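
Incidentally, since file() just gives you arrays, PHP's array_diff can do the same comparison in one call; here's a quick equivalent:

$a = file("/tmp/original.txt");
$b = file("/tmp/modified.txt");

// everything in $a that isn't in $b
foreach (array_diff($a, $b) as $v) {
    print "$v";
}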


Cheers!

Friday, May 8, 2009

WordPress: Modifying Post Content Automatically

Have you ever had the need to have specific content on every single blog post? The way that sounds, I'm sure you'll probably say, "no". But think about it: the content includes the HTML too. OIC...

So here at work, I've gotten a request to have all links passed through a link tracker that we built in-house. "Why?" Well, we want to do some sort of testing, so we want to keep track of all clicks on links. That basically means we need to change all URIs so that the link tracker URI is prepended to the actual URI the user entered.

Not bad, we can use the WordPress filter, "the_content", and that will get the job done!

Now let's look at the trackURL function:

preg_match_all("/href=\"(.*)\"/", $content, $urls, PREG_PATTERN_ORDER);
$uniqueurls = array_unique($urls[1]);
foreach ($uniqueurls as $url) {
    if (!preg_match('/trackerUri.com/', $url)) {
        $content = str_replace($url, "http://trackerUri.com/?ab=c/hij&ku=" . $url, $content);
    }
}


It is straightforward; it goes through the content it is given and finds any "href" references. After those are found, it does a search & replace to insert the new tracker URI in front of the URI the user entered.

The rest utilizes WordPress's built-in functions to update the post_content. Easy!

Cheers!


function trackURL($content) {
    global $post;
    preg_match_all("/href=\"(.*)\"/", $content, $urls, PREG_PATTERN_ORDER);
    $uniqueurls = array_unique($urls[1]);
    foreach ($uniqueurls as $url) {
        // skip links that already go through the tracker
        if (!preg_match('/trackerUri.com/', $url)) {
            $content = str_replace($url, "http://trackerUri.com/?ab=c/hij&ku=" . $url, $content);
        }
    }
    return $content;
}

function trackContent($content) {
    global $post;
    $pcontent = array();
    $pcontent['ID'] = $post->ID;
    $pcontent['post_content'] = trackURL($content);
    wp_update_post($pcontent);
    return $content;
}
add_filter('the_content', 'trackContent');

Wednesday, May 6, 2009

WordPress: update_option ping_sites via PHP CLI

I really hate relying on other sites' servers for anything. For obvious reasons, if their site is slow, then whatever functionality you're relying on will be slow as well.

In this latest case, technorati.com's blog update service was timing out, causing our WordPress posts to take up to 2 minutes to publish. Not fun! So, sorry technorati, we're replacing you with google's update service until you get your act together. :)

Now, why am I doing this via CLI? Well, I manage 300+ WordPress blogs, so with a bash wrapper, I can give this script the site and then it will connect and run an update_option and we're good to go!

Yay!


// note: when driven by the bash wrapper, this wp-config.php path would presumably need to vary per site
include '/mnt/www/wordpress/wp-config.php';
global $wpdb;

$pingsites = get_option('ping_sites');
// swap technorati for google's blog search ping service, and drop icerocket
$newpingsites = str_replace('http://rpc.technorati.com/rpc/ping','http://blogsearch.google.com/ping/RPC2',$pingsites);
$newpingsites = str_replace('http://rpc.icerocket.com:10080/','',$newpingsites);
echo $newpingsites . "\n";

echo "updating $argv[1]'s ping_sites\n";

update_option('ping_sites', $newpingsites);

Friday, April 10, 2009

JavaScript: Get the Current URI's Domain Name

We have consultants that do some work for us (who doesn't?). I have a beta site set up for them to test with. But for some reason, when I check the beta site, the URIs for this one part are going directly to the production site. "Hmmm...why...oh huh?! why!?" So it turns out they were hard-coding a domain root as part of the URI in one of their JavaScript files. Now really, what kind of portability does that give me!?

Their code looked similar to this:

var uri = 'http://drincruz.blogspot.com/' + uriPath;

So, I figured I'll fix this little nuisance by writing a quick function that will get the domain root of a URI. Now, I'm sure there's some sort of built-in function that'll get me what I need, but I couldn't find it. So if there is, someone please let me know!


function getDomainRoot()
{
var uri = location.href;
var startIndex;
if (null != uri.match("file:///") )
startIndex = 8;
if (null != uri.match("http://") )
startIndex = 7;
if (null != uri.match("https://") )
startIndex = 8;
var endIndex = uri.indexOf("/", startIndex);
var domainRoot = uri.substring(0, endIndex);
return domainRoot;
}


Now, I can put this function into their JavaScript and it can be used like this:
var uri = getDomainRoot() + '/' + uriPath;

/* I should really add a trailing slash check. Ah well. Next time! */

Cheers!

Tuesday, April 7, 2009

Moving a MySQL Cluster to New Hardware

Well, it came to that point in any server's life: it could no longer take any additional disks for space, and the cost to upgrade the memory and/or hard disks didn't outweigh the benefits of upgrading to new hardware; it was time to put these babies to sleep.

So one of our active/passive MySQL clusters had a three-hour maintenance window last night (technically this morning: 12am-3am). The servers came in a few weeks earlier, so I had them prepped beforehand. Though, how do you minimize downtime while moving the databases over? Well, obviously don't take down the server unless absolutely necessary, and of course automate as much of the process as possible!

12am: I'll kick off my script that backs up all DBs in our cluster and saves them in a mount point on a SAN that the new servers can read as well.
## I really could have run this process before 12am as it didn't cause an outage. Ah well, next time.

~12:45am: Now that we have the data, we can take down the MySQL cluster. I rebooted the old cluster servers with new IP addresses and hostnames, then set the new cluster boxes to use the original IP addresses of the old cluster. See what I did there? Basically, we have a lot of web apps pointing at the MySQL cluster's DNS name, so we need to keep that DNS. (Yes, yes, we could've just updated DNS as well; it's a long story, but basically my department doesn't handle DNS, so going this route is more seamless.)

I had already set up replication in MySQL, but since the hostnames had changed, replication broke. This was really the only snag in the maintenance. Basically, MySQL was looking for the old relay logs named after the old hostname. Enter MySQL's nice documentation. I just needed to do this:

shell> cat new_host_name-relay-bin.index >> old_host_name-relay-bin.index

shell> mv old_host_name-relay-bin.index new_host_name-relay-bin.index


1am: Now that replication is working, all I needed to do was load the data, which is one last script to run.

Here's the basic rundown:
- find the db dump
- create the db
- gunzip the db dump to a tmp dir
- load the dump to the db
- run GRANTs for whatever user


#!/usr/bin/perl -w

# mysqldumpload.pl

use strict;
use DBI;
use File::Find;

my $dbuser = qw(dbuser);
my $dbpass = qw(dbpass);
my $backupdir = "/mnt/backup/mysql";
my $dumpdir = "/usr/local/tmp/dump";

my $dbh = DBI->connect("DBI:mysql:mysql",$dbuser,$dbpass)
or die "Couldn't connect to db: $!";

my @files;

find(\&findFile, $backupdir);


sub findFile {
    my $file = $File::Find::name;
    my $sth;
    # if it's a db, another_db, or another_db2 process it!
    if ($_ =~ /db/ or $_ =~ /another_db/ or $_ =~ /another_db2/) {
        print "Processing $_...\n";
        my $dump = $_;
        $dump =~ s/\.gz//g;
        my $mysqldb = $dump;
        $mysqldb =~ s/\.sql//g;
        # create the database
        my $sql = qq!CREATE DATABASE $mysqldb!;
        $sth = $dbh->prepare($sql);
        print "$sql\n";
        $sth->execute();
        # gunzip the dump to the tmp dir and load it
        my $dumpfile = "$dumpdir/$dump";
        my $gunzip = qq!gunzip -c $file > $dumpfile!;
        print "$gunzip\n";
        system($gunzip);
        my $mysqlload = qq!mysql --user=$dbuser --password=$dbpass $mysqldb < $dumpfile!;
        print "$mysqlload\n";
        system($mysqlload);
        # run the GRANTs for whichever user this db belongs to
        if ($_ =~ m/db/) {
            $sql = qq!GRANT ALL PRIVILEGES on `$mysqldb`.* TO 'dbuser'\@'%'!;
            $sth = $dbh->prepare($sql);
            print "$sql\n";
            $sth->execute();
        }
        if ($_ =~ m/another_db/) {
            $sql = qq!GRANT ALL PRIVILEGES on `$mysqldb`.* TO 'another_db'\@'%'!;
            $sth = $dbh->prepare($sql);
            print "$sql\n";
            $sth->execute();
        }
    }
}


~1:45am: The script finished with the last DB. Not too bad!

Now, as you can see, we have a fairly simple setup. We only wanted to move over certain DBs and only needed a few db users to grant privileges to. With minor modifications to this script, you can have it fit your needs.

*NOTE: A lot of the prep work (i.e. adding mysql users) was done earlier, but can probably be done in one shot as well.

1400+ databases successfully moved to a new cluster, yay! Cheers!

Friday, April 3, 2009

WordPress Plugin: Change Directory Location of Smilies

We recently moved our WordPress smilies to our caching server (for obvious reasons). Well, we then needed to have WordPress use the new URI. We always want to leave core WordPress code intact unless absolutely necessary, so I wrote this quick plugin to search for the old/original URI and use the new one.

Plain. Simple.

Cheers!


function custom_smilies($text) {
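// swap the stock smiley URL for the caching server (get_settings() is just the older name for get_option())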
$output = str_replace(get_settings('siteurl') . "/wp-includes/images/smilies/", "http://new.location.com/img/wp/", $text);
return $output;
}

add_filter('comment_text', 'custom_smilies', 30);


Note: This works for WordPress 2.0.1. I'll try to test it with 2.7 and any other versions I have installed later. :)

Friday, March 27, 2009

Perl Search and Replace an Entire Directory Tree

I recently had to do a search/replace for a couple of directories here at work. I know you're probably thinking, "why didn't you just do a find | xargs sed?" Good point, though there is actually more to the code (not shown here) that worked better in Perl. So ha! ;)

Anyhow, my code utilizes File::Find. If you've never used it, check it out; it's very useful. The process is easy: point find at a directory root and write a sub to process each file.

Nothing too fancy here, but if you don't understand anything feel free to shoot me a message.

Cheers!


use strict;
use File::Find;

my $dir = "/mnt/www";

find(\&doFile, $dir);

sub doFile {
    my $file = $File::Find::name;
    my $fdir = $File::Find::dir;
    open (FILE, $file) or die "Couldn't open file: $file $!\n";
    my @foundfiles;
    while (<FILE>) {
        if ($_ =~ /your_search_regex/) {
            print "found: $file\n";
            push(@foundfiles, $file);
        }
    }
    close(FILE);

    # run an in-place search/replace on each file that matched
    foreach my $f (@foundfiles) {
        my $replace = qq~/usr/bin/perl -pi -e 's,your_search_regex,your_replacement,g' $f~;
        print "$replace\n";
        system($replace);
    }
}

Friday, March 20, 2009

WordPress Plugin: Change Admin Pagination

This plugin has been tested on WordPress Version 2.7.

Are you seeing enough posts in your WordPress admin screen? Or are you seeing too much?

Well, either way, I've written a plugin to accommodate the lack of choice for posts-per-page. First, please note this is unfortunately not a standard install for a WordPress plugin. Because WordPress hardcoded a static value (15) for posts-per-page, I needed to modify some of WordPress's core code. So, continue at your own risk, copy/paste with caution, and make backups often!

We will be editing the following files: wp-admin/edit.php and wp-admin/includes/post.php

Let's start with wp-admin/includes/post.php. Look around line 806; you should see this line:

wp("post_type=post&what_to_show=posts$post_status_q&posts_per_page=15&order=$order&orderby=$orderby");

Aha! This is where they hardcode the posts_per_page to 15.

We're going to add a global variable line and replace the 15 with that global variable.

global $ppp;
wp("post_type=post&what_to_show=posts$post_status_q&posts_per_page=$ppp&order=$order&orderby=$orderby");

Now, let's go and edit the wp-admin/edit.php page. Look around line 250, you should see this code block:

__( 'Displaying %s–%s of %s' ) . '%s',
number_format_i18n( ( $_GET['paged'] - 1 ) * $wp_query->query_vars['posts_per_page'] + 1 ),
number_format_i18n( min( $_GET['paged'] * $wp_query->query_vars['posts_per_page'], $wp_query->found_posts ) ),
number_format_i18n( $wp_query->found_posts ),
$page_links
); echo $page_links_text; ?></div>
<?php } ?>


We're going to add the following to make it look like this:
); echo $page_links_text;
if (function_exists('writePostsPerPage')) {
    echo ' Posts per page';
    echo writePostsPerPage(15);
    echo writePostsPerPage(25);
    echo writePostsPerPage(35);
}
?>
</div>
<?php } ?>


That's all of the core code you'll be editing. Nothing too major! Now, all you need to do is add this following plugin code to a file and activate the plugin. Now, you should see a "Posts per page" option in your Edit posts screen.



/*
Plugin Name: Change Admin Pagination
Description: Change the pagination values in the Admin section for Posts and Pages
Version: 1.0
Plugin URI:
Author: Adrian J. Cruz
Author URI: http://drincruz.blogspot.com/
License:
Copyright 2009 Adrian J. Cruz (email : drincruz at gmail.com)

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA

Changelog:
2009-03-20: didn't want as many core code changes, so moving as much as i can
to this plugin.

*/

global $ppp;
$ppp = $_GET['posts_per_page'];

/**
* This changes the posts_per_page option.
*/
function updatePostsPerPage($ppp) {
    if (!$ppp)
        return;
    $old_ppp = get_option('posts_per_page');
    if ($ppp == $old_ppp)
        return;
    else
        update_option('posts_per_page', $ppp);
}

/**
* This will return the proper URL for the posts_per_page links.
*/
function writePostsPerPage ($n) {
    if (!$n) { return; }
    $ppp_url = "<a href=\"?paged=" . $_GET['paged'] . "&posts_per_page=" . $n . "\">$n</a>";
    return $ppp_url;
}

/**
* We want to update_option before the admin page is loaded so the page
* will take the changes immediately.
*/
add_filter('admin_head', updatePostsPerPage($ppp));


*Note: I know blogger doesn't handle source code very well within its 'code' tags, sorry about that! I think I'll try and find a free fileshare where I can keep source code and you can just download. I'll keep you guys posted.

Cheers!

Monday, March 16, 2009

Adding Widgets to Your WordPress Dashboard

Well, first off, I take no credit for most of this information you're about to read. I just tweaked the code I got from wpengineer.com, so go check out that site for the original details.

Anyhow, we're currently in the process of beta testing WordPress 2.7 (upgrading from 2.0.1). We've written a lot of custom code on top of WordPress already, and most of it worked with little or no help. One of the recent complaints from one of our test users was regarding the new dashboard and the lack of "recent posts" and "scheduled posts".

If you haven't used WordPress 2.0.1, well, there used to be a list of recently published posts and scheduled posts right on the dashboard. Honestly, in 2.7 the data is still there, just a few more clicks away. But anyway, to please our users, I wrote this plugin to add two widgets, "My Recent Posts" and "My Scheduled Posts".

It's loosely called "AddDashboard" for now. I'll try and come up with something snazzier later.

Enjoy. Cheers!

post script: Blogger doesn't play nicely with raw PHP tags, so double-check the code below if something looks off after copy/paste.


<?php
/*
Plugin Name: AddDashboard
Plugin URI:
Description: This plugin adds on to the WP Dashboard
Version: 0.1
Author: Adrian J. Cruz
Author URI:
License:
Changelog:
*/


/**
* Content of Dashboard-Widget
*/
function my_wp_dashboard_recent() {
    $result = get_recent_posts(10, 'publish');
    if (!$result) { echo 'No recent posts.'; }
    else {
        foreach ($result as $key => $value) {
            print "<a href=\"post.php?action=edit&post=" . $value['ID'] . "\">"
                . $value['ID'] . ": " . $value['post_title'] . "</a>\n";
        }
    }
}

function my_scheduled_posts() {
    $result = get_recent_posts(10, 'future');
    if (!$result) { echo 'No posts scheduled to be published.'; }
    else {
        foreach ($result as $key => $value) {
            print "<a href=\"post.php?action=edit&post=" . $value['ID'] . "\">"
                . $value['ID'] . ": " . $value['post_title'] . "</a>\n";
        }
    }
}

/**
* add Dashboard Widget via function wp_add_dashboard_widget()
*/
function my_wp_dashboard_setup() {
wp_add_dashboard_widget( 'my_wp_dashboard_recent', __( 'My Recent Posts' ), 'my_wp_dashboard_recent' );
}

function my_scheduled_posts_setup() {
wp_add_dashboard_widget( 'my_scheduled_posts', __( 'My Scheduled Posts' ), 'my_scheduled_posts' );
}

/**
* use hook, to integrate new widget
*/
add_action('wp_dashboard_setup', 'my_wp_dashboard_setup');
add_action('wp_dashboard_setup', 'my_scheduled_posts_setup');

function get_recent_posts($num = 10, $status) {
    global $wpdb;

    // Set the limit clause, if we got a limit
    $num = (int) $num;
    $limit = '';
    if ($num) {
        $limit = "LIMIT $num";
    }
    // if no $status default to publish
    if (!$status) {
        $status = "publish";
    }

    $sql = "SELECT * FROM $wpdb->posts WHERE post_type = 'post' AND post_status='$status' ORDER BY post_date DESC $limit";
    $result = $wpdb->get_results($sql, ARRAY_A);

    return $result ? $result : array();
}
?>

Friday, March 13, 2009

Command Line Upgrade Wordpress Databases

Oh yay, we're going to upgrade WordPress here at work! We have a couple hundred WordPress databases, so I don't want to manually load up a web browser and click the "WordPress Upgrade" button a hundred or so times. In technology, if you have to do the same process over and over, you should always look for a way to automate it, and that is what I've done here.

The PHP script is rather simple. If you look at it, it basically defines which database, db user, db password, db server, etc. to use, and then it runs WordPress's wp_upgrade() function. That simple! Honestly, the hardest part (for me at least) was figuring out which WordPress files I needed to include and what order they go in! (In the beginning, I had the order of my require statements wrong. Doh!)



<?php // note: blogger tends to hose raw PHP tags, so double-check this opening tag if you copy/paste

if ($argc != 2) {
print "Usage: php $argv[0] [wpdb]\n";
exit;
}
$db = $argv[1];
define( 'ABSPATH', dirname(__FILE__) . '/' );
define('WP_ADMIN', true);
define('DB_NAME', $db);
define('DB_USER', 'wpuser'); // Your MySQL username
define('DB_PASSWORD', 'wppass'); // ...and password
define('DB_HOST', 'mysql.server.com'); // 99% chance you won't need to change this value
require('wp-load.php');
require('wp-admin/includes/upgrade.php');
print "Upgrading " . DB_NAME . "...\n";
wp_upgrade();
print "\n\nUpgrade for " . DB_NAME . " complete!\n";
?>


Now as you can see, I set up my PHP script to handle the database name as an argument. So now, all I need to do is write a bash wrapper script! I haven't done it yet, but it'll basically be similar to this:


for db in `cat listofdbs.txt`; do php ajc-wpupgrade.php $db; done;


And Bob's your uncle!

Thursday, March 5, 2009

Reset FreeBSD root Password

I have various different VMs for testing purposes. I needed to test something on FreeBSD and luckily I already had a VM setup. But, hmmm...I forgot the root password!

So, if you ever happen to forget the root password for one of your FreeBSD servers, fear not, you can reset it! Well, that is, if you have access to boot the server into single-user mode, of course! ;)

The process is quick and painless; once in single-user mode, select the default shell (/bin/sh). After that, just enter these commands to change the root password:


# mount -u /
# mount -a
# passwd


Obviously, if you've used passwd before, you know it'll prompt you for the new password and a confirmation. After that, all you need to do is exit, or sync; sync; reboot.

That's it. Cheers!

Tuesday, March 3, 2009

Check MySQL Tables Across Multiple Databases

If you manage multiple databases on a MySQL server that are identical in schema but unique in data, I'm sure you'll come across this issue at some point: the MySQL err log shows an error on a table, but it doesn't specify which database!

(The error looks like this: 090217 18:38:22 [ERROR] /usr/local/bin/mysqld: Can't open file: 'wp_comments.MYI' (errno: 144))

Now sure, you could just run mysqlcheck on every database's tables, but obviously that may take a while, especially if you manage a lot of databases.

So, remember just a few blog posts before, I mentioned a way of monitoring MySQL's err log. Well, to be honest, that was actually a product of my original idea; that idea being:
- monitor the MySQL err log for table errors
- if there is a table error, run a mysqlcheck for that specific table against all databases
- email the results

Well, why only search for one specific error when you can search for them all? So, I broke my script down into two: an err log monitor and a mysqlcheck script. Same as always, tailor the code to fit your environment. For example, in this line: if ($dbdata->[0] =~ m/^wp_.+/) { I'm only going to search through our WordPress databases and skip everything else. You can change the regex so that you only check databases that begin with the letter 'a', if that fits your situation (/^a.+/).

I'm sure you get the point! Eventually (when I'm less busy), I'll modify some of my code so this runs as more of an automated process. All you need to do is have the err log script write the tables giving errors to a file, and then write a wrapper that runs the err log script followed by the mysqlcheck script. But that's for another day, kids!

Cheers!


#!/usr/bin/perl -w
#
# chkmysqltables.pl v1.0
# 2009-02-17
# Adrian J. Cruz
#

use strict;
use DBI;

if (!$ARGV[0]) {
    print "Usage: $0 \"db_table [db_table2 ...]\"\n";
    print "Note: Use quotes if you are checking multiple tables.\n";
    exit;
}

my $db_table = $ARGV[0];

my $mysqlu = "dbuser";
my $mysqlp = qw(dbpass);
my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) =
localtime(time);
my $yy = sprintf("%02d", $year % 100);
$year += 1900;
$mon += 1;
if ($mon < 10) { $mon = 0 . $mon; }
if ($mday < 10) { $mday = 0 . $mday; }
my $todaylog = $year . $mon . $mday . $hour . $min;
my $runlog = "/root/log/chkmysqltables.$todaylog.log";

my @tables;
push @tables, $db_table;
# we only want the unique tables
my %sorthash;
@sorthash{@tables} = ();
my @error_tables = keys %sorthash;

my @dbs;
my $dbh = DBI->connect("DBI:mysql:mysql",$mysqlu,$mysqlp)
|| die "Unable to connect: $!\n";
my $query = "SHOW DATABASES";
my $sth = $dbh->prepare($query)
|| die "Could not prepare: $!\n";
$sth->execute || die "Could not execute: $!\n";
while (my $dbdata = $sth->fetchrow_arrayref) {
    # only check our WordPress databases (tweak this regex for your environment)
    if ($dbdata->[0] =~ m/^wp_.+/) {
        push(@dbs, $dbdata->[0]);
    }
}
$sth->finish;

open(LOG, ">>$runlog");
print LOG "$todaylog\n";
for my $t (@error_tables) {
    for my $db (@dbs) {
        system "/usr/local/bin/mysqlcheck --password=$mysqlp $db $t >> $runlog\n";
    }
}
print LOG "--END LOG--\n";
close(LOG);

my $email_message;
open(MSG, "<$runlog");
$email_message = do { local $/; <MSG> };
close(MSG);

sendMail( "you\@email.com", "mysqlerr\@email.com",
"mysqlerr: $db_table", "$email_message" );

sub sendMail {
my ($to, $from, $subject, $message) = @_;
my $sendmail = "/usr/sbin/sendmail";
open(MAIL, "|$sendmail -oi -t");
print MAIL "From: $from\n";
print MAIL "To: $to\n";
print MAIL "Subject: $subject\n\n";
print MAIL "$message\n";
close(MAIL);
}

Tuesday, February 24, 2009

Processing a Mailbox With POP and Perl

My company sends out newsletters. Of course, for whatever reason, sometimes a user no longer wishes to receive these newsletters; and sometimes, instead of using the "unsubscribe" links in the newsletter, they will just mark the email as junk. Their ISP, email provider, etc. will then do their job and send us an email saying that there has been a complaint about us spamming. The routine continues and the emails go round and round.

Here is where our "Auto Unsubscribe" mailbox comes into play. We have a mailbox set up where email providers forward the complaint emails. They pretty much always keep the original email message intact, so the auto unsubscribe job is easy: search through the mailbox for the data you need, namely the unsubscriber's email address and the newsletter.

Finding the correct data is simple on our end as long as the original newsletter email is intact. At the bottom of all of our newsletters is a link to an unsubscribe page. Now, in plain text this is obviously just going to be a line with a URI and parameters. For example, http://www.server.com/unsubscribe.htm?newsletter=ABC&e=youremail@mail.com

You see where I'm going here? You've got all the data you need and now you can do what you want with it.

Cheers!

NOTE: I'm using the Mail::POP3Client module because I personally needed to use SSL when connecting to the mailbox. You can easily use Net::POP3, Net::IMAP, etc. just the same.


#!/usr/bin/perl
#
# unsub-pop.pl v1.0
# Adrian J. Cruz
# 2009-02-19

use strict;
use Mail::POP3Client;

my $mail_server = qw(mail.server.com);
my $mail_user = qw(user@server.com);
my $mail_pass = qw(pass123);
my $pop = new Mail::POP3Client( USER => $mail_user,
PASSWORD => $mail_pass,
HOST => $mail_server,
USESSL => "true");
$pop->Connect
or die "couldn't connect: $!\n";

# find newsletter and email
my @ecunsubs;
for (my $i = 1; $i <= $pop->Count(); $i++) {
    foreach ($pop->Body($i)) {
        if (/unsubscribe\.htm\?newsletter\=(.+\&e\=.+)\"/) {
            push(@ecunsubs, $1);
        }
    }
    # mark msg to be deleted
    $pop->Delete($i);
}
$pop->Close;

# remove duplicates so now we process only unique entries
my %sorthash;
@sorthash{@ecunsubs} = ();
my @unsubs = keys %sorthash;
...
...
etc.
...

Wednesday, February 18, 2009

Monitoring the MySQL ERR Log

If you've ever wanted a quick and easy way to monitor MySQL for errors, well, you've stumbled onto the right place.

At work we've been getting these corrupt tables that go unnoticed until an end-user notifies us. Which, of course, for any website is utterly disastrous, since some part of your site (even if it is just a minuscule widget) is now broken. So, being the proactive technologists that we are, we decided to monitor the MySQL err log daily. You can find your err log in the "data" directory where you installed MySQL (it defaults to the name hostname.err).

What this Perl script does is parse through the err log and find today's ERRORs. After it finds the errors, it emails them. So, obviously, go through and change the variables to your liking and set up a cron job to have this run at the end of the day.

Cheers!


#!/usr/bin/perl -w
#
# chkmysqlerrlog.pl v1.0
# 2009-02-17
# Adrian J. Cruz
#

use strict;

my $dbserver = "dbserver.domain.com";
my $email_to = "email\@domain.com";
my $email_from = "mysqlerr\@domain.com";
my $errlog = "/usr/local/mysql/data/dbserver.domain.com.err";
my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) =
localtime(time);
my $yy = sprintf("%02d", $year % 100);
$year += 1900;
$mon += 1;
if ($mon < 10) {
$mon = 0 . $mon;
}
# set $today in date format: YYMMDD to get today's errors
# e.g.:
# 090217 18:38:22 [ERROR] mysqld: Can't open file: 'ts.MYI' (errno: 144)
my $today = $yy . $mon . $mday;
my $todaylog = $year . $mon . $mday . $hour . $min;

# parse through the mysql err log for ERRORs
my @errors;
open(IN, $errlog);
while (<IN>) {
    chomp;
    if ($_ =~ m/$today.+(\[ERROR\].+$)/) {
        push(@errors, $1);
    }
}
close(IN);

# we only want the unique errors
my %sorthash;
@sorthash{@errors} = ();
my @error_msgs = keys %sorthash;

my $email_message = "$todaylog\n";
for my $e (@error_msgs) {
$email_message .= $e . "\n";
}

if ($email_message && @error_msgs) {
sendMail( $email_to, $email_from,
"$todaylog $dbserver ERRORS", $email_message );
}

sub sendMail {
my ($to, $from, $subject, $message) = @_;
my $sendmail = "/usr/sbin/sendmail";
open(MAIL, "|$sendmail -oi -t");
print MAIL "From: $from\n";
print MAIL "To: $to\n";
print MAIL "Subject: $subject\n\n";
print MAIL "$message\n";
close(MAIL);
}


NOTE: with a little bit of work you can even include the Warnings; we weren't too worried about those.

Tuesday, February 17, 2009

Don't Panic Filesystem, It'll Be Okay

"Ouch! My G1 isn't reading my microsd card! Nooooo! I need to listen to something on the train! Oh wait, let me just use iMeem for now."

That's not a good way to start off the first day back from a long weekend. But unfortunately, that's how I started it. Now that I finally have time to sit down and troubleshoot why my microsd card was not reading, I can tell you all about how I fixed it!

So, I plug in my G1 to my laptop here at work. Here's my dmesg output:

acruz@acruz:~$ dmesg | tail
[28938.531732] sd 6:0:0:0: [sdc] Attached SCSI removable disk
[28938.531824] sd 6:0:0:0: Attached scsi generic sg3 type 0
[28938.969366] FAT: Filesystem panic (dev sdc1)
[28938.969387] fat_get_cluster: invalid cluster chain (i_pos 0)
[28938.969392] File system has been set read-only
[28941.483468] FAT: Filesystem panic (dev sdc1)
[28941.483493] fat_get_cluster: invalid cluster chain (i_pos 0)
[29038.322352] FAT: Filesystem panic (dev sdc1)
[29038.322365] fat_get_cluster: invalid cluster chain (i_pos 0)
[29038.322372] File system has been set read-only


Whoa...a panic is never any good in linux. Hmmm...all of my data is intact. Okay, no problem then. All I need to run is a quick fsck.vfat -a.


acruz@acruz:~$ sudo fsck.vfat -a /dev/sdc1
dosfsck 2.11, 12 Mar 2005, FAT32, LFN
FATs differ but appear to be intact. Using first FAT.
/albumthumbs
Contains a free cluster (4). Assuming EOF.
/music/P/mp3/Pimsleur01.mp3
Contains a free cluster (43543). Assuming EOF.
Reclaimed 1552 unused clusters (50855936 bytes) in 17 chains.
Performing changes.
/dev/sdc1: 1105 files, 127709/244416 clusters


Well, it seems I had a few files that were probably corrupted the last time I transferred them. Sheesh! Well, at least fsck fixed it! All I needed to do then was umount/mount, and everything looked better again!

Here's the healthy dmesg output:

acruz@acruz:~$ dmesg | tail
[30213.598301] sd 7:0:0:0: Attached scsi generic sg3 type 0
[30222.667142] sd 7:0:0:0: [sdc] 15659008 512-byte hardware sectors (8017 MB)
[30222.669098] sd 7:0:0:0: [sdc] Write Protect is off
[30222.669107] sd 7:0:0:0: [sdc] Mode Sense: 03 00 00 00
[30222.669112] sd 7:0:0:0: [sdc] Assuming drive cache: write through
[30222.673118] sd 7:0:0:0: [sdc] 15659008 512-byte hardware sectors (8017 MB)
[30222.675136] sd 7:0:0:0: [sdc] Write Protect is off
[30222.675147] sd 7:0:0:0: [sdc] Mode Sense: 03 00 00 00
[30222.675151] sd 7:0:0:0: [sdc] Assuming drive cache: write through
[30222.675164] sdc: sdc1


That looks much better! w00t!

Cheers!

Android Phone 2: HTC Magic

Finally, a second Android-based phone! You can read (and view photos!) CNet UK's first look at the phone here.

Well, right now I'm just curious whether the battery life is better than the HTC Dream's (G1). Beyond that, I couldn't care less for a touchscreen-only phone (one of the main reasons I didn't get an iPhone). Still, the HTC Magic looks pretty slick and seems to be a nice addition to the Android family.

Cheers!

Monday, February 16, 2009

Rename Multiple Files With Perl

So here's the problem: I have a directory full of files that have special characters in their names; in this particular case, spaces and pound signs. And for some reason I couldn't find a working way to do this with a quick bash script. I was trying a for x in `ls` ; do ... but because of the special characters my filenames weren't being read properly. Wonderful!

So, after maybe a good 20 minutes of trying to find the right shell commands to use, I gave up and wrote this quick perl script.

So...anybody wanna give me a hint on how I'd be able to do this in a shell script?


#!/bin/perl
#
$dir="/home/drin/tmp/";
opendir(DIR, $dir) || die "Couldn't open $dir";
@files = grep {/\.doc/ && -f "$dir/$_"} readdir(DIR);
closedir(DIR);

for $x (@files) {
    $y = $x;
    $y =~ s/\#//g;
    $y =~ s/\ /_/g;
    print "Renaming $x to $y\n";
    # prefix with $dir so the rename works no matter where the script is run from
    rename("$dir/$x", "$dir/$y");
}


Cheers!

Friday, February 13, 2009

Happy 1234567890 Epoch Time (EST)


"Wait for it... wait for it... come on... getting close... aha! There we go!"

Yay, we had an "epoch" of a time here on the east coast. hehehe

Cheers!

Wednesday, February 11, 2009

Quick Perl for Bash Boolean Woes

My task is simple: delete files that are in one data center's SAN but not in any of the others. To be safe, I want a little logic that only deletes a file if it does not exist in any of the other data centers.

So, maybe my brain isn't working properly or I just knew that perl would've been quicker, but I couldn't find the right way to use the -a (AND) bash boolean operator. If anyone knows how I could use bash's booleans to test if a file does not exist in location a, b, c, ..., n+1, then I would be more than curious to know!

This did NOT work: if [ ! -f $x -a ! -f $y -a ! -f $z ] ; then ...

I ended up doing the following in perl fyi:


#!/usr/bin/perl -w

open(FILE, "files_to_del.txt") || die("Could NOT open!");
@cfdata=<FILE>;
close(FILE);

$ny1="/mnt/ny1/www/";
$ny2="/mnt/ny2/www/";
$sj1="/mnt/sj1/www/";
$tx1="/mnt/tx1/www/";

$count=0;
foreach $filepath (@cfdata) {
    chomp($filepath);
    $ny1file=$ny1 . $filepath;
    $ny2file=$ny2 . $filepath;
    $sj1file=$sj1 . $filepath;
    $tx1file=$tx1 . $filepath;
    # only delete the ny2 copy if the file doesn't exist in any of the other data centers
    if (! -e $ny1file && ! -e $sj1file && ! -e $tx1file) {
        $count++;
        print "DELETING: $count $ny2file\n";
        unlink($ny2file);
    }
}


Cheers!

Tuesday, February 3, 2009

Helix3 2008R1

So, I was recently asked to get data off of a laptop for a friend. I took the laptop home with me and noticed that I had left my old Helix CD at my parents' house. "Not a big deal, I can just download the latest iso!"

The first thing I noticed was the use of GNOME, but wait, look further, it's actually Ubuntu! Cool! But wait, I just want to recover some data and I don't need GNOME running; it costs too much overhead! Bah!

I was not even able to append a boot parameter to boot in console-only mode (or at least I wasn't able to find one). Fine, I got the files off anyway!

I also noticed an "install" option. That's cool! Maybe I'll be more curious and even test it out.

Cheers!

UPDATE
Ok, silly me! The answer for booting into console-only mode was right in my face! If you hit F6 at the Helix menu, you can edit the boot parameters. Just delete "splash" and that's it! Though GNOME still loads, so you'll just have to hit Ctrl+Alt+[F1-F6] to get to one of the console screens.

Thursday, January 29, 2009

Winmail.dat Attachments For Linux Thunderbird

This is only going to be a rant and a link to Andrew Beacock's blog post solving that pesky winmail.dat attachment issue for linux users.

I never had this issue until this morning when a project manager sent over an email with an attachment.

Sheesh Microsoft, can we stop making the proprietary apps please!?

Friday, January 23, 2009

Regular Expressions in MySQL

Do you need to limit your SQL results and LIKE '%foo%' isn't giving you a narrow enough result list?

Well, I'm sure you know how to narrow down those SQL results with a regex! All you need is the REGEXP operator! Yay!

Example:

SELECT * FROM table WHERE column REGEXP 'data[0-9]';


You get the point; now enjoy! Cheers!

Friday, January 16, 2009

MySQL Search and Replace

I recently had to update some information in MySQL. Yes, I'm sure you're thinking, "yea, so...that's easy!". But the way the data was entered in the column was more like an entire paragraph! The information I needed to change was only a last name in the paragraph. Obviously, I didn't want to type out the entire paragraph again with my edits just so I could UPDATE the table. Instead, MySQL has a nice replace() function built in. Yay!

The code will look as follows:
UPDATE your_table set your_column=replace(your_column,'Smith','Jones');

And now all occurrences of "Smith" in your_column will be replaced with "Jones".

Cheers!

Wednesday, January 14, 2009

Hello, Android SDK


So in an attempt to stay on budget and yet keep myself entertained through this recession, I finally set up my Android SDK on one of my computers. I also finished up my first app! Ok, ok, so it's the generic "hello world" app that all newbies start off with, but hey I still haven't decided what I'd like to build. It's always just fun to tinker around a system anyway.
Let the fun begin!

Cheers!

Tuesday, January 13, 2009

Where's My Ubuntu Screensaver Admin?!

I think this will probably be more of a reminder for me, but in case anyone else has this curious issue with Ubuntu Gutsy Gibbon, then read on!

For whatever reason, I do not have "Screensaver" listed under System>Preferences. I'm using XScreenSaver since gnome-screensaver seemed to just freeze after any amount of time. Anyhow, if you need to configure your settings for XScreenSaver all you need to do is type in xscreensaver-demo in a shell. Tada!

Sheesh, I had to google that every time and then follow it up with a "doh!" once I read what to do!

Cheers!

Tuesday, January 6, 2009

It's 2009 and I Am Back!

Well, it has been over two months since my last post. I have been unfortunately out of ideas on what to blog about. So, perhaps I will slowly stray away from my usual format of blogging about technical recipes and code to bantering on about anything random. Hmmm...we'll see.

Anyhow, happy 2009! I was given an awesome present: a [T-Mobile] G1. Yay! I've only had it for a little over a week now, but it is awesome. I've managed to get root thus far; stay tuned for more updates.