Quick Crontab Tips and Tricks

If you need to perform an action at a specified time on a Unix or Linux server, then you need crontab, also referred to as cronjobs or simply cron. We use crontab for lots of automated tasks throughout the day and thought we would share some of the most common questions we get asked and the answers we give. Bear in mind this isn’t an all-encompassing introduction to crontab, but rather an explanation of common tricks you might need once you already have a basic understanding.

First it’s probably worth covering off a few basic things about how to add a job to the crontab – we will reference the ways to do it on an Amazon EC2 instance running Amazon Linux AMI, but they are very similar across different Linux server installs:

Basic crontab tricks and tips

  • Crontab can have issues running under usernames with a hyphen or underscore in them – which can be a problem if you are using the default username of ec2-user that Amazon assigns. To avoid any potential hiccups type:
    sudo su

    This will log you in as the root user and you can then create your cronjobs from there. Make sure to enter the password for the root user if prompted

  • You can edit your crontab by typing:
    crontab -e

    This will open the crontab file in the default editor (usually VI) without having to go searching through the filesystem for it, and if you don’t have one already it will set it all up for you.

  • The basic crontab format is as follows:
    Minute | Hour | DayOfMonth | Month | DayOfWeek | Command

    So if you wanted to test the most basic of cronjobs you could perform a directory listing of the /etc directory every minute of every day with the following command:

    * * * * * ls /etc
  • Want to email the output of the cronjob to a single email address easily? Just add the following line (with your own address) above the cronjob in question:
    MAILTO="you@example.com"
  • Want to see a log of what the crontab is doing in order to debug any potential issues? Just use the following command. It will show the last few lines of the log and update if a cron runs again while it is open. To quit out of it just press CTRL and C together:
    tail -f /var/log/cron
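Putting the basics above together, a minimal crontab might look like the following sketch – the address is a placeholder for your own, and MAILTO tells cron where to send any output:

```
MAILTO="you@example.com"
* * * * * ls /etc
```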

Running Crontab every 2 hours between 9-5

One of the most common things we are asked is how to run a cronjob every two hours between a certain time frame and it isn’t so clear from the information currently available so we will explain it here as simply as possible. The command you will need is:

00 9-17/2 * * * ls /etc

This will run our command to list the contents of the /etc directory every two hours between 9am and 5pm (i.e. at 9:00, 11:00, 13:00, 15:00 and 17:00). Below is an explanation of the command in more detail:

  • 00: This is the minute of the hour to run the job – in this case 00 in every qualifying time period
  • 9-17/2: This is the complex part that trips people up. The 9-17 tells the command to run between 9am and 5pm on a 24-hour clock (hence 17), and the /2 ensures that the script only runs every second hour.
  • *: Ensures this runs every day of the month.
  • *: Ensures this runs every month (this is the month field, not a week field).
  • *: Ensures this runs every day of the week – if you only wanted this to run on weekdays you could change this to 1-5.
  • ls /etc: Lists the contents of the etc directory – the command we want to run – obviously you will probably want to run something more useful!
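As a worked variation of the breakdown above, here is the same schedule restricted to weekdays only – the final field swaps from * to 1-5:

```
00 9-17/2 * * 1-5 ls /etc
```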

Sending cronjob output to multiple email addresses

Another issue people face is sending the output of a cronjob to multiple email addresses. Often people will tell you to edit the MAILTO command described above to include each email address separated by a comma; however this is flaky at best and will often result in the cron failing with an UNSAFE message in the log file. The best way to achieve this is to edit your email aliases file, which is actually very simple to do:

  • Type
    vi /etc/aliases
  • If you are running your crontab as the root user as suggested, you should see a line at the bottom of this file that defines the root alias. Change it as follows:
    root: email1@email.com,email2@email.com,email3@email.com
  • The output will now be sent to all three email addresses.
  • If you are running your crontab as a different user, just enter a new line at the bottom of the file with the username you are running it as before the colon. Depending on your mail server (Sendmail and Postfix both do this) you may also need to run the newaliases command to rebuild the aliases database before the change takes effect, and don’t forget to check your crontab afterwards to ensure you aren’t overriding this with a MAILTO command.


Fixing Mac OSX Mountain Lion

Usually Mac OS X updates are great – tons of useful new features and incredibly cheap compared to any new version of Windows. With Mountain Lion, however, it looks like Apple have broken a string of successful OS X releases with one that would be merely okay, were it not for some glaring issues that are annoying loyal customers. Below are a couple of the issues we’ve faced since upgrading and how we’ve fixed them – although some fixes have come at a cost.

I now get a GrowlMail Plugin disabled message on Mail

GrowlMail was a great piece of software that allowed the Mail program to send alerts so I could easily spot when I had a new email. Mountain Lion has Apple’s own Notification Centre built in to take care of this kind of functionality; unfortunately, in doing this they have removed the ability to disable the GrowlMail plugin within the Mail application, resulting in an annoying warning every time you start Mail.

How to fix it

To fix it we need to delete the GrowlMail bundle manually, which is pretty easy. In Finder go to:

~/Library/Mail/Bundles

And delete the folder:

GrowlMail.mailbundle
The RSS reader functionality in Mail has disappeared

This disappeared without warning causing a bit of a stir, and so we now have to rely on third party apps to read RSS feeds, at least for now.

How to fix it

Google Reader seems to be accepted as the best RSS reader replacement. It provides a web-based interface that allows you to read, subscribe to and organise RSS feeds for free. If you are used to having a client application and want to stick with that way of doing things, a good choice is Reeder, which has Mac (£2.99), iPhone (£1.99) and iPad (£2.99) versions that all stay synced with each other thanks to the web-based Google Reader account.

There is no way of finding out what feed URLs you had subscribed to previously in Mail

The RSS functionality is gone lock, stock and barrel and that means that if you aren’t sure what feeds you had subscribed to when populating your new Google Reader account, you may miss a few.

How to fix it

Use Finder to visit the following folder: ~/Library/Mail/V2/RSS

Here you will find a bunch of .mbox folders that are named in the same way as the feeds were in the previous Mail app. Inside each you will find a file called Info.plist. Open it in TextEdit and you should see something like this:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>RSSFeedTitle</key>
	<string>Apple Hot News</string>
	<key>RSSFeedURLString</key>
	<string>http://images.apple.com/main/rss/hotnews/hotnews.rss</string>
</dict>
</plist>

The part with the feed URL you will need to add to Google Reader is underneath RSSFeedURLString. In this particular case the URL would be http://images.apple.com/main/rss/hotnews/hotnews.rss.

AirPlay Mirroring is restricted to newer Macs

A highly awaited feature for those with an Apple TV was the ability to mirror their display wirelessly to a TV; however, what wasn’t made clear before launch was that this would only be available on much newer Macs – according to Apple, older ones don’t have the necessary chipset to run it.

How to fix it

Thankfully some clever developers have a software workaround called AirParrot that works fantastically. Once again the full version will cost you ($9.99 USD), however there is a free trial version so you can check it works as well for you as it does for us before you buy.


Do you have Google Apps on a cPanel server and can’t send email from it? Give this a try

Note: This is quite a specific technical issue, but one we had for one of our clients recently that took some serious research to solve. I’m sure this problem is relatively commonplace given the combination of technology used, and the information on how to fix it is scarce, so hopefully this will answer a real head-scratcher for anyone out there experiencing the same problem!

We were experiencing a very strange issue with one of our clients recently: they were unable to send email from any of the mail forms on their website to addresses at their own domain. In short, when they filled in an email form on example.com that was intended to send an email to enquiries@example.com, it just failed silently. If they had it email example2.com instead, it worked fine. The setup they have is as follows:

  • PHP Website on shared hosting with a cPanel admin system.
  • Google Apps email.
  • Domain and nameservers hosted by their domain registrar.
  • DNS records on the registrar’s nameservers pointing to the shared hosting for the website (@ and www records).
  • DNS records on the registrar’s nameservers pointing to Google Apps for the email (MX records).

This was clearly an issue with the hosting company (who shall remain nameless!) and they were completely unaware of how to fix the problem after several conversations, so we took on the task of diagnosing the issue. The first thing to do was check what happened when email was sent from the website to the domain. This was achieved by using the “Email Trace” tool in their cPanel.

Typing enquiries@example.com into the Email Trace showed us the actual root of the issue: an error of “virtual_aliases via virtual_aliases router forced address failure”. This particular issue would have identified itself as “PHPMAILER_RECIPIENTS_FAILED” when the enquiry form was filled in on the website, had the error not been suppressed by the web host’s server configuration.

Now the root of the problem was clear: the web server was misconfigured. Because the domain’s website was hosted on the web server, the server assumed it was handling the email for that domain as well, which wasn’t the case.

The way to solve this on most Linux servers that you have root access to would be as follows:

  • Edit the /etc/localdomains file.
  • Remove example.com from the file.
  • Edit the /etc/remotedomains file.
  • Add example.com to this file, ensuring that you enter it in the same way it was listed in the localdomains file.
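The four steps above can be sketched as shell commands. This demonstration works on scratch copies in the current directory; on a real server the files are /etc/localdomains and /etc/remotedomains, you would need root, and example.com stands in for the affected domain:

```shell
# Scratch copies standing in for /etc/localdomains and /etc/remotedomains
printf 'example.com\nanother-site.com\n' > localdomains
printf 'unrelated.com\n' > remotedomains

# Remove the domain from localdomains...
sed -i '/^example\.com$/d' localdomains

# ...and add it to remotedomains, listed exactly as it was before
echo 'example.com' >> remotedomains
```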

This would make the server check for the MX records on the registrar’s nameservers before sending the email, rather than assuming its own settings were the ones to be used. However, this wasn’t possible on this client’s website due to the use of cPanel for server admin and the lack of root access to the shared server. Fortunately we can use a workaround of duplicating the domain’s MX records in cPanel to achieve the same effect. To do this just do the following:

  • Click the MX Entry tool in cPanel
  • Select the problem domain
  • Add the same MX records that are specified in the registrar’s DNS, ensuring that any that were there originally are deleted.
  • Now you can go back to the Email Trace tool and try again – you should see a success message, and any emails sent from enquiry forms on the site will work perfectly!


Setting up a domain on Amazon Web Services EC2

This post follows on from our previous post Amazon AWS EC2 Easy Web Server Set Up Guide, showing the natural next step of pointing a domain to your new cloud server.

Now that we have a new Amazon Web Services EC2 server fully configured as a webserver, let’s point a domain to it, as this has its own idiosyncrasies in the AWS system.

Setting up AWS to provide an IP address for your DNS settings

  • First we will need to assign an IP address to our server using Amazon’s Elastic IP system, so click Elastic IPs in the left hand menu.
  • In the top menu under “Addresses”, click “Allocate New Address”.
  • Ensure “EIP used in” is set to “EC2” and click “Yes, Allocate”.
  • Click the tick box next to the new IP address that has appeared in the main left pane and press the “Associate Address” button in the top menu.
  • In the “Instance” drop down, select your webserver that we set up previously and click “Yes, Associate”.
  • That’s it for management on the AWS side of things. However, please note that the public DNS address you were using to access your webserver through SSH, SFTP or MySQL will have changed. From this point on it is probably better to connect using the new IP address that you have just associated with your webserver.

Setting up your DNS

  • We won’t go into too much detail here as there are far too many domain and DNS providers (GoDaddy and 123-Reg to name a couple) to cover every eventuality. Just ensure that the IP address you have created in AWS Management is entered for the @ and www records for your domain.

Setting up some VirtualHosts

  • You may want to set up more than one domain on the server, so let’s set up a VirtualHost to show the way to do this on an EC2 instance.
  • SSH to your EC2 instance as described in our previous article.
  • Your default webserver configuration after going through the previous steps should load additional configuration files from the “conf.d” directory, so let’s switch to it with the command “cd /etc/httpd/conf.d”.
  • Type “sudo su” and hit enter to give us all the privileges we might require.
  • Type “vi httpd-vhosts.conf” and hit enter. This will create a new file where we will store the virtual hosts information.
  • Press “i” to enter insert mode and type the following, replacing your-server.com with the domain name you want to use:

NameVirtualHost *:80
<VirtualHost *:80>
        ServerName www.your-server.com
        ServerAlias your-server.com *.your-server.com
        DocumentRoot /var/www/html/your-server.com
</VirtualHost>

  • Press escape, then type “:wq!” to exit the editor.
  • This tells the apache server to look in the /var/www/html/your-server.com directory for the files for any request to the your-server.com domain.
  • Let’s check that the rules you’ve typed in are working correctly by typing “httpd -S” – this should tell you that the syntax is OK.
  • Finally, we just need to restart Apache to get it to take notice of the new configuration file. Type “apachectl restart”.
  • And you’re done! Type your domain name into your browser and you should see the files from the appropriate directory!
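Should you want to host a second domain on the same instance later, just append another block like this to the same httpd-vhosts.conf file and restart Apache again – the names below are placeholders for your second domain:

```
<VirtualHost *:80>
        ServerName www.second-site.com
        ServerAlias second-site.com
        DocumentRoot /var/www/html/second-site.com
</VirtualHost>
```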


Amazon AWS EC2 Easy Web Server Set Up Guide

Please note: This is going to be another of those techie posts. The previous ones have been well received, and since we have spent some time with this particular combo of hardware and software lately, we thought we would share the info here.

Lately we have been trialling Amazon Web Services, and particularly their cloud server product called Amazon EC2. Now is a pretty good time to get your feet wet since they are offering a completely free tier of usage, allowing you to play around with cloud computing without any cost.

This guide will detail how to set up a web server with a LAMP (Linux, Apache, MySQL, PHP) stack on a free Amazon EC2 account, and as a bonus will also let you know how to connect to it with Coda 2 (our current favourite Mac coding environment) to start getting your web site online.

So without further ado here is the guide.

Starting up a server

  • Go and sign up for Amazon Web Services. You will need to enter a credit card, but it won’t be charged as we will only be using a server that is on the free usage tier.
  • Sign in to the AWS Management Console (It’s a link under the dropdown in the top right) with your user name and password.
  • Click on EC2: Virtual Servers in the Cloud – the first option under Compute & Networking. Amazon offers a baffling amount of services, but for our purposes (and the purposes of most people) EC2 will be the only one you need.
  • Click the button marked “Launch Instance”. Instances are basically servers, so what you are doing when you are launching an instance, is effectively starting up a server.
  • Click “Classic Wizard” and continue, as it’s a bit more explanatory than the quick launch one, though that might be a better option once you get to grips with things.
  • Select a 32-bit Amazon Linux AMI type server – it has the star next to it, which means it’s eligible for the free tier if you select a Micro (lower powered) instance.
  • Ensure the number of instances is set to 1 and the type is set to Micro (so it’s free). The availability zone doesn’t really matter here, so select no preference on that dropdown and hit continue.
  • On the next page, all of the defaults are fine, so we just hit continue again.
  • Let’s give this server a name for when we are administering it later – let’s go for Webserver. Enter Webserver in the value box next to name and hit continue.
  • Now on to some security. In order to securely access the server from your local machine, you will need to create a key pair, so under Create a New Key Pair enter a name for it. It doesn’t matter what it’s called really, but for this tutorial let’s call it “myawskey” and click “create and download your key pair”. This will download a file called myawskey.pem to your computer. Put it somewhere you will be able to find it later on.
  • You now need to add some security to your new server. This comes in the form of security groups, so let’s select an existing group (AWS offers you some defaults) to make it easy. Select “quick-start-1” and click Continue.
  • Next you will get a screen showing everything you’ve just selected – as long as nothing looks particularly odd you can click “Launch” and your new server will be up and running within a minute or so!
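One note on the key pair you downloaded: SSH refuses to use a private key that other users on your machine could read, so if you later see an “UNPROTECTED PRIVATE KEY FILE” warning, tighten the file’s permissions. A quick sketch using a stand-in file (on your machine, run the chmod against wherever you saved myawskey.pem):

```shell
# Create a stand-in for the downloaded key, then restrict it to owner read-only
touch myawskey.pem
chmod 400 myawskey.pem
ls -l myawskey.pem    # permissions should now read -r--------
```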

Adding all the software you need to your new server to make it into a Webserver

So we now have a brand new server, but it needs some software installed before it can serve web pages for us, and that’s the next step. Here we will be connecting to the server via SSH and installing the rest of the LAMP stack: Apache, MySQL and PHP (we already have the L, in that the server’s operating system is Linux). These instructions are Mac-centric, although the same commands could be used with a different terminal application on a PC.

  • First of all, let’s get familiar with a few things in the AWS console. Your server (an instance) is listed under the instances menu item on the left hand side. Once you click this you will see it on the right. Click the tick next to it and you will see some information about it appear at the bottom. Scroll down to “public DNS” and copy it. This is the address you will be using throughout to connect to the server.
  • Let’s take the key pair you created earlier and put it in the default directory for key pairs so that our Terminal app will apply it automatically when we try to connect to our server. Copy the myawskey.pem file to the “.ssh” folder in your home directory. For example, mine would be /Users/anthony/.ssh/
  • Start up the Terminal application on your Mac and type “ssh ec2-user@” followed by the public DNS address you copied earlier. For example it may look something like this: ssh ec2-user@ec2-00-00-00-00.compute-1.amazonaws.com
  • Now we are in we’ll need to run some linux commands to install the software, but first let’s give ourselves the appropriate permissions to do all the installs by typing “sudo su”.
  • First off let’s install any updates to the packages that are already on the server by typing “yum update”. Say yes to all the updates until it has finished running.
  • Now let’s install Apache with “yum install httpd”.
  • Now let’s get PHP installed: “yum install php php-mysql”.
  • And finally let’s install MySQL: “yum install mysql-server”.
  • Now let’s start those services, beginning with the webserver: “service httpd start”.
  • And ending with MySQL: “service mysqld start”.
  • Now we need to do some work to add a user that can manage the database from your favourite MySQL program on your local machine, so let’s run the following to start the program: “mysql”
  • Decide on a username and password for your admin account and enter this command (in the example we are using username “admin” and password “password” – please pick something a bit more inventive!):
  • Hit enter to go onto a new line and type “\g” to execute the command, then \q to quit.
  • While we are on the server, let’s do a couple more setup tweaks that you may need when you start to add your webpages to the server, enabling URL rewriting and PHP short tags.
  • To enable URL rewriting type “vi /etc/httpd/conf/httpd.conf” to open the apache config file in the vi editor.
  • Scroll down to where it says “AllowOverride None” inside the Directory directive for the DocumentRoot (normally /var/www/html).
  • Press i to enter insert mode and change “AllowOverride None” to “AllowOverride All”. Press escape, then type :wq! to save and quit the editor.
  • To enable PHP short tags, type vi /etc/php.ini to open the appropriate file in the vi editor.
  • Scroll down to where it says short_open_tag = Off and change it to short_open_tag = On by pressing i to go into insert mode.
  • Press escape, then type :wq! to quit the editor.
  • Lastly, we will need to restart the webserver so those last two changes take effect. Enter “service httpd restart”.
  • That’s it! We are all done on setting up the LAMP stack.
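A typical statement for creating such a remote admin account at the mysql prompt on MySQL 5.x (the username, password and the % host wildcard here are examples you should change – a tighter host pattern is safer) would be a sketch like:

```sql
GRANT ALL PRIVILEGES ON *.* TO 'admin'@'%' IDENTIFIED BY 'password' WITH GRANT OPTION;
```

As described above, hit enter, type \g to execute it, then \q to quit.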

Using Amazon AWS Management Console to allow web and MySQL access to the server.

So we have everything set up, but our security settings mean we can’t actually access the server through our browser (to view the webpages) or through our SQL program to add any databases, so let’s rectify that.

  • Go back to your browser window that has the AWS Management Console open and click “Security groups” under “Network and Security” on the right hand menu.
  • Click the tick next to “quick-start-1” to select the security group we applied to our webserver.
  • Select the “Inbound” tab in the details pane at the bottom left. You will see that SSH access on port 22 is set up, which is what we’ve just used to access the server and install all of the software.
  • Now let’s add HTTP access so you can view web pages in a browser. Select “HTTP” under “Create a new rule”, leave the source as it is and click “Add Rule”.
  • Save the rule by clicking “Apply Rule Changes”.
  • Now let’s add remote MySQL access. Select “MYSQL” under “Create a new rule”, leave the source as it is and click “Add Rule”.
  • Save that rule by clicking “Apply Rule Changes”.
  • All done! You should now be able to visit http:// followed by your public DNS address and see an Apache test page from your server. You should also be able to use your favourite SQL program to access the server with the admin username and password we set up.

Last part: Uploading files to your webserver

When it comes to uploading files to your webserver, most would use FTP; however, for added security and convenience (since it doesn’t exactly play well with FTP), Amazon AWS uses SFTP instead. You can check that SFTP is working with the command line, but we had some issues setting it up in Coda 2, so we’ll cover that briefly here for anyone with the same problem.

  • First let’s check SFTP is actually working. Go to your terminal and type “sftp ec2-user@” followed by your public DNS address. For example this may look like: sftp ec2-user@ec2-00-00-00-00.compute-1.amazonaws.com.
  • If it connects, then SFTP is working fine. To set this up in Coda, go to site settings and follow these steps:
  • On “Site”, set protocol to “SFTP”
  • Set Server to your public DNS address.
  • Set User Name to “ec2-user”.
  • No need to enter a password or select a key file in the next box, as we put the key file in the default directory earlier on and Coda is smart enough to look there.
  • Coda should now be set up to upload files via SFTP.

As of the time of writing, Coda 2 had a bug which stopped it from connecting to this particular type of SFTP, which wasn’t a problem in earlier versions. Thankfully the Coda developers, Panic, are aware of it and it should be fixed in an official update soon; in the meantime one of the developers has offered an unofficial release on their bug tracking platform that should solve the problem. You can download the unofficial fix here.

That’s it! Enjoy your new free webserver from Amazon, and when you’ve tinkered around with it some more you will be able to make more of what cloud hosting has to offer!

We’ve just published a further post on how to use a domain name with your new cloud server. Why not take a look?


Martin Lewis’ Moneysavingexpert sold to Moneysupermarket – A Unique Take

Martin Lewis, consumer champion of daytime TV and Moneysavingexpert.com (MSE) fame has agreed to sell MSE to Moneysupermarket Financial Group, the company behind financial comparison powerhouse Moneysupermarket.com (MS). The details of the deal are summed up well by this Affiliates4u blog post, but the financials of the deal – a possible £87 million payout – are essentially as quoted below (from the linked article):

The purchase is to be structured partly with a £35 million cash payment upfront and an estimated 22.1 million in MoneySupermarket.com shares. In addition there’ll be a deferred consideration of up to £27 million, which will take into account the future performance of MoneySavingExpert.com against non-financial metrics. The targets are said to be based on growth and trust in the site.

In my (Anthony Ball’s) time in previous companies and employment, I’ve worked very closely with both of these companies at one time or another, and have crossed paths with many others who have been involved with one company or the other. I have also been in a similar situation myself, selling what I had founded as consumer-led initiatives to larger operations, though on a far smaller scale than this.

I believe Martin has done what he believes is the right thing for the right reasons – a feeling I can certainly identify with – providing security for the company’s long-term ambitions, himself and his staff. Indeed he has tackled the major issues that any user of MSE would be concerned about in this Q&A, including an agreed independent editorial stance and a fantastic donation to charity from the proceeds. In my experience, though, this good groundwork could all quite easily be for nought.

At the end of the day MSE, a consumer champion with the goal of providing the best advice to consumers regardless of the financial compensation it receives, is being taken over by a PLC with shareholders to answer to and profit targets that must be consistently met or exceeded. These two goals are polar opposites and threaten to be the nail in the coffin of truly independent, consumer-led financial advice from MSE.

Up until now it has only been Martin’s unwavering belief in doing the right thing for consumers – helping and educating them about their finances – that has kept MSE on the righteous path he has set. It was the same when I had total control of various projects and websites over the years; however, upon the sale of those projects, for what I believed to be those same “right reasons” at the time, that influence and credo was slowly chipped away for monetary gain by committee, with the impact of my voice lessening day by day.

Clearly Martin has become something of a celebrity for his trailblazing, and as such should be a far louder voice than I was in these situations, but the company he has sold to is also far bigger than the companies that slowly whittled away my vision. As a last note, all I would say to Martin, given the chance, is:

Your work is not done yet; for all the good things you have put in place in the sale contract, it only starts here. Maintaining the integrity of MSE, if you want to and if you can, will require as much, if not more, work than you are handing off by selling it.

I hope I’m wrong about this. I hope that in this case doing the right thing can win over capitalism, but it will take a lot of work and I hope Martin is cut out for this very different challenge. As for me, I’m glad to be out of that situation and in control of R N D M R, providing products and services with consumers’ and clients’ best interests at heart once more.


Fixing poor website design with GreaseMonkey

Some of the client work we do is with schools, working on projects to improve IT literacy of students through providing them with extranets they can use to start to learn about publishing information on the web, within the safety and security of a gated community of their friends and teachers. From this we also provide management information software and administrative capabilities for head teachers and other staff to ensure that they have full visibility and control of what happens.

As an aside, we were asked by one of these clients if something could be done to improve the display of a site provided to them by the government. They were having real issues using it for its intended purpose, and requests to improve it seemed to be falling on deaf ears.

An assessment of the site showed an interactive graphing application, intended for presentations to staff and school governors, that was crippled by several serious design flaws, turning what could have been a very useful tool into something simply unusable. Below is a list of some of the main issues we found:

  • Approximately 50% of the screen was taken up by header information and portal logos.
  • The remaining 50% of the screen was mainly filled with large titles for the particular graph selected, leaving a minimal amount of that graph actually showing on the page.
  • The overall viewport for the graph was too small to show the entire graph at any one time, causing heavy scrolling to be used in order to see the data plots and the reading on the axis.
  • Nested framesets caused further issues throughout. While framesets can still have their uses in very specific scenarios, nesting them in this particular way caused expected browser behaviours to fail.
  • Web standards for scrolling were broken and replaced with proprietary methods that were intended (yet failed) to improve display and unfortunately hindered the ability to use any browser zoom and full screen options to display more of the graph.

So what was the best way to quickly enable the site to become the valuable presentation tool that it could be? Enter the oddly named GreaseMonkey!

GreaseMonkey started life as an extension for Mozilla Firefox, one of the first wave of browsers not built into an operating system to gain traction with the public, thanks to its non-profit open architecture that allowed a community of programmers to improve it collaboratively. GreaseMonkey essentially provides a way to execute JavaScript on a web page after it has loaded in order to change its look or functionality, and it is still available today; if you are a Firefox user, we highly recommend you take a look at the GreaseMonkey Firefox Extension.

Our current web browser of choice is the free Google Chrome due to its overall speed and adaptability, and thankfully the ability to run GreaseMonkey scripts is already built into it, meaning we could use it for this particular issue. Installing GreaseMonkey scripts is a piece of cake too: just drag any script that follows the GreaseMonkey naming convention of name.user.js onto your Chrome browser and you will be asked if you would like to install it. We called our script report.user.js. You can always click the “wrench” icon and go to Tools, then Extensions to manage or delete scripts down the line.

You can learn everything you need to know about GreaseMonkey and how to write a GreaseMonkey script over at the GreaseMonkey wiki. Below is the script that was created to fix our particular problem, to give you an idea of how it all goes together. Website URLs and names have been replaced with XXX to protect the guilty!

// ==UserScript==
// @name          Improve report display for XXX
// @namespace     http://www.rndmr.com
// @description   Overwrites styling used on XXX and drastically increases the viewport for the graphs
// @homepage      http://www.rndmr.com
// @author        RNDMR Limited
// @icon          http://www.rndmr.com/favicon.png
// @include       https://www.XXX.org/Reserved.ReportViewerWebControl.axd?*
// @include       https://www.XXX.org/reportsandanalysis/ViewReport.aspx*
// ==/UserScript==
//If we are working on the main report page with associated controls and logos at the top
if (window.location == 'https://www.XXX.org/reportsandanalysis/ViewReport.aspx') {
	//kill header/logo area
	var header = document.getElementById('header');
	header.style.display = 'none';
	//kill report filtering options
	var reportfilter = document.getElementById('reportFilter');
	reportfilter.style.display = 'none';
	//kill export options in gold box
	var exportoptions = document.getElementById('_ctl0_ContentPlaceHolderContent_rvMain__ctl1');
	exportoptions.style.display = 'none';
	//Increase base size of report frame
	var reportframe = document.getElementById('ReportFrame_ctl0_ContentPlaceHolderContent_rvMain');
	reportframe.style.height = '1000px';
} else {
	//we are working with the actual report that is shown on the iframe
	//set report div height and report cell to auto since current setup overwrites this for some reason
	var innerreportdiv = document.getElementById('oReportDiv');
	innerreportdiv.style.height = 'auto';
	var reportcell = document.getElementById('oReportCell');
	reportcell.style.height = 'auto';
	//set all tables to auto height since they are also overwritten
	var tables = document.getElementsByTagName('table');
	for (var i = 0; i < tables.length; i++) {
		tables[i].style.height = 'auto';
		tables[i].style.overflow = 'auto';
	}
}
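One fragile point in scripts like the one above is that document.getElementById returns null when an element is missing, so any variation in the page will throw an error and stop the script part-way through. A small defensive helper could wrap each hide step – note that hideAll and the stub document passed to it are our own illustration, not part of the GreaseMonkey API:

```javascript
// Hide every element in a list of ids, skipping any that are not
// present on the current page instead of throwing an error.
function hideAll(doc, ids) {
	for (var i = 0; i < ids.length; i++) {
		var el = doc.getElementById(ids[i]);
		if (el) {
			el.style.display = 'none';
		}
	}
}

// In a userscript this would be called as, for example:
// hideAll(document, ['header', 'reportFilter']);
```

The same guard-then-act pattern applies equally to the height and overflow tweaks in the else branch.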


CakePHP tricks: Baking everything with scaffold

One of the best parts of CakePHP, our current rapid development framework of choice, is scaffold. This is a built in way to quickly add a front end onto any database (or set of models) to better visualise whether or not an application is going to work in the real world. This allows you to rapidly prototype ideas and make more informed decisions on the way forward.

We prototype several ideas a month so for us, more speed means more time to assess and refine an idea. In order to provide us with as much time as possible we also use CakePHP’s excellent “Bake” console application to quickly generate models, controllers and views in conjunction with scaffold.

However, for all CakePHP’s good points, one thing that lets it down is the online documentation. For example the page in the online manual for the bake console application states the following:

Parameters on the ControllerTask have changed as well. cake bake controller scaffold is now cake bake controller public.

As well as not adequately explaining the change (or the format of the parameters, for that matter), it also misses out one of the most important commands for rapid prototyping: cake bake controller all scaffold. Making a rapid prototype of a web application in early versions of Cake was a simple three step process (once you had set up your database connection and security settings):

  • Create the database using the cake naming conventions
  • Run cake bake model all
  • Run cake bake controller all scaffold

Then you could change your database to your heart’s content and keep re-baking all of the models as you tweaked the application, until you came up with something that fits the requirements.

It seems that at some point the method to bake all of the controllers with scaffold in this way has either been removed and not documented properly, or accidentally dropped. Either way, one of the most useful rapid prototyping commands for CakePHP has slipped by the wayside. Not great, so let’s look at how we can build it back into the latest stable release of CakePHP, which is currently 2.0.6.

The file we will need to edit is in


First locate the “all” function and change it as below – the change is highlighted by a comment:

/**
 * Bake All the controllers at once.  Will only bake controllers for models that exist.
 *
 * @return void
 */
	public function all() {
		$this->interactive = false;
		$this->listAll($this->connection, false);
		ClassRegistry::config('Model', array('ds' => $this->connection));
		$unitTestExists = $this->_checkUnitTest();
		foreach ($this->__tables as $table) {
			$model = $this->_modelName($table);
			$controller = $this->_controllerName($model);
			App::uses($model, 'Model');
			if (class_exists($model)) {
				if ($this->params['scaffold']) { //RNDMR.com: If statement to work out whether we requested scaffold or not
					$actions = 'scaffold';
				} else {
					$actions = $this->bakeActions($controller);
				}
				if ($this->bake($controller, $actions) && $unitTestExists) {
					$this->bakeTest($controller);
				}
			}
		}
	}

Easy! Now we need to tell bake to expect a new parameter, called scaffold, in its new way of receiving parameters, otherwise it will just display the help text when we try to use it. Locate the “getOptionParser” function and modify it as below (once again the change is marked with a comment):

/**
 * get the option parser.
 *
 * @return void
 */
	public function getOptionParser() {
		$parser = parent::getOptionParser();
		return $parser->description(
				__d('cake_console', 'Bake a controller for a model. Using options you can bake public, admin or both.')
			)->addArgument('name', array(
				'help' => __d('cake_console', 'Name of the controller to bake. Can use Plugin.name to bake controllers into plugins.')
			))->addOption('public', array(
				'help' => __d('cake_console', 'Bake a controller with basic crud actions (index, view, add, edit, delete).'),
				'boolean' => true
			))->addOption('scaffold', array( //RNDMR.com: New scaffold option added
				'help' => __d('cake_console', 'Bake a controller using scaffold.'),
				'boolean' => true
			))->addOption('admin', array(
				'help' => __d('cake_console', 'Bake a controller with crud actions for one of the Routing.prefixes.'),
				'boolean' => true
			))->addOption('plugin', array(
				'short' => 'p',
				'help' => __d('cake_console', 'Plugin to bake the controller into.')
			))->addOption('connection', array(
				'short' => 'c',
				'help' => __d('cake_console', 'The connection the controller\'s model is on.')
			))->addSubcommand('all', array(
				'help' => __d('cake_console', 'Bake all controllers with CRUD methods.')
			))->epilog(__d('cake_console', 'Omitting all arguments and options will enter into an interactive mode.'));
	}

And that’s it! You should now be able to run the following command to generate all of your controllers with scaffold in one easy step.

cake bake controller all --scaffold

Happy scaffolding!


Unlocking an iPhone 3G for free

Recently, in our iOS app development, we’ve been increasing the number of devices we test on, and as I have an old iPhone 3G from o2 we decided to get it up and working again for testing. The only problem was that we needed it to run on the Orange network, and the iPhone was locked to o2.

Searching the internet for ways to unlock an iPhone 3G throws up a few roadblocks – the first of which is the interchangeable use of the terms unlocking, unblocking and jailbreaking. They are all very different things, which is worth covering here for clarity:

  • Unlocking: This is what we wanted to do, all it means is taking a mobile that can only be used with SIM cards from a single network, and using it with a SIM card from another network.
  • Unblocking: This is a largely illegal practice that involves circumventing a block the network has placed on a phone’s IMEI code (a unique identifier for the phone), usually because the phone has been reported lost or stolen.
  • Jailbreaking: Not illegal but certainly frowned upon by Apple, this allows you to install applications directly to the iPhone 3G, 3GS, 4 or 4S, bypassing Apple’s App Store process. Apple has been known to “brick” (stop them from working entirely) iPhones that have been jailbroken when software updates are installed.

After a reasonably extensive search of the internet, navigating the minefield of interchangeably used unlocking/unblocking/jailbreaking terms, it seems that there are several companies claiming to unlock iPhones for fees of up to $149 USD, taking anywhere up to three weeks for the unlock to take effect. Not great options.

As a last gasp I contacted o2 directly using their live chat service, where I was connected to an agent straight away. Within the five minute chat I was informed that, with a few details, they would unlock my iPhone 3G for free, and that it would take up to 72 hours before I received a text to let me know it had taken place.

Great news, but even better was the fact that less than 24 hours later we had a text telling us that the iPhone was unlocked and to connect it to iTunes as you can see below:

Unlocking SMS from o2 for an iPhone

Upon connecting to iTunes we were greeted with the screen we had been waiting for, confirming that the iPhone 3G had been unlocked from o2 and was now free to be used on Orange, Vodafone, Three, T-Mobile or any other UK network for that matter:

iTunes message showing that the iPhone has been unlocked.

Since we had inserted an Orange SIM card we received another message stating the iPhone had received some updated carrier (network) settings. Once we’d hit OK on that box we got the following message to confirm that the settings had been updated on the iPhone 3G:

iTunes message showing that the iPhone carrier (network) settings have been updated.

And now, after that relatively painless process, we have an iPhone that is functioning perfectly on the Orange network, having been successfully unlocked from o2, absolutely free! Great customer service from o2, and something that will certainly influence who gets my next personal mobile phone contract.


New Twitter UK Brand Pages

Twitter released the first UK brand pages yesterday with a test group of brands including Sky, Cadbury, EA Sports and Asda.

The new brand pages add the ability to place a banner just under the profile information, as well as a promoted tweet that sits at the top of the timeline, allowing the brand to get an important message across and embed some more media if they wish.

Let’s take a look at what some of the launch brands have decided to do with the new look pages:


EA Sports FIFA Twitter UK Brand Page


A screenshot of the current Sky HD Twitter UK brand page


A screenshot of the current Asda Twitter UK brand page


A screenshot of the current Cadbury Twitter UK brand page
