Just in from Slashdot: the final release of Ubuntu 15.04 is now available. A modest set of improvements is rolling out with this spring's Ubuntu. While this means the OS can't rival the heavy changelogs of releases past, the adage "don't fix what isn't broken" is clearly one 15.04 plays to.

The headline change is systemd, featured for the first time in a stable Ubuntu release, replacing the in-house Upstart init system. The Unity desktop, at version 7.3, receives a handful of small refinements, most of which aim to either fix bugs or correct earlier missteps (for example, application menus can now be set to be always visible).

The Linux kernel is version 3.19.3, further patched by Canonical. As usual, the distro comes with fresh versions of various familiar applications.

Join in the discussion at Slashdot here.


MySQL and MariaDB are probably the most popular databases out there for Linux-based development; however, the stock MySQL installation in CentOS 6 and 7 comes with query caching disabled. This results in heavy MySQL processor use for sites that start to take off.

Allocating even a small amount of memory for MySQL caching will result in a big speedup of your web application and can decrease MySQL processor usage by up to 20% for heavy sites.

The MySQL query cache is a very simple, straightforward query-level cache. This means it caches the results of a specific query rather than operating at the table or database level. It is completely separate from the key buffer, InnoDB buffer pool, and other MySQL memory structures.

If you want an optimized and speedy response from your MySQL server, you need to add the following two configuration directives to your MySQL server:


query_cache_size : The amount of memory (SIZE) allocated for caching query results.
The default value is 0, which disables the query cache.


query_cache_type : Sets the query cache type. Possible options are as follows:
0 : Don't cache results in or retrieve results from the query cache.
1 : Cache all query results except for those that begin with SELECT SQL_NO_CACHE.
2 : Cache results only for queries that begin with SELECT SQL_CACHE.

To enable Query Caching

Open your MySQL config file.
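On a stock CentOS install the server configuration normally lives at /etc/my.cnf (an assumed path; some distributions use /etc/mysql/my.cnf instead):

```shell
sudo vim /etc/my.cnf
```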

Then add the following block; sample values are:
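A minimal starting block might look like the following; the sizes are illustrative values rather than tuned recommendations, so adjust them to your workload and available RAM:

```ini
# Enable the query cache and give it a modest amount of memory
query_cache_type  = 1      # cache all cacheable SELECT results
query_cache_size  = 10M    # total memory reserved for cached results
query_cache_limit = 256K   # largest single result that may be cached
```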

In the example above, the maximum size of individual query results that can be cached is set to 256K using the query_cache_limit system variable (a memory size in KB). Alternate values for larger databases would be:
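For larger databases with more RAM to spare, something along these lines is a reasonable starting point (again, assumed sample values, not prescriptions):

```ini
query_cache_type  = 1
query_cache_size  = 64M
query_cache_limit = 1M
```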

Enable Thread Caching

You should also enable thread caching at the same time if it's not already enabled. thread_cache_size defines how many threads the server should cache for reuse.

When a client disconnects, the client’s threads are put in the cache if there are fewer than thread_cache_size threads there. Requests for threads are satisfied by reusing threads taken from the cache if possible, and only when the cache is empty is a new thread created. This variable can be increased to improve performance if you have a lot of new connections.

Open your MySQL config file again.

Then add the following line:
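For example (4 is just a commonly used starting value, not an official recommendation; tune it using the ratio described in the next step):

```ini
thread_cache_size = 4
```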

To fine-tune this value, log into your MySQL database and look at the following values:
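One way to pull the relevant counters is with the mysql command-line client (the credentials here are placeholders):

```shell
mysql -u root -p -e "SHOW GLOBAL STATUS WHERE Variable_name IN
  ('Threads_created', 'Connections', 'Max_used_connections');"
```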

Then take a look at the resulting values. If Threads_created / Connections is over 0.01, increase thread_cache_size. At the very least, thread_cache_size should be greater than Max_used_connections.

InnoDB Buffer & Query Cache

Many references on the Internet will tell you that the query cache is useless if InnoDB is being used. If you are using InnoDB only and have limited RAM, then the InnoDB buffer pool should without a doubt get first priority.

If you have RAM to spare, then it is highly recommended to use the query cache, especially for WordPress sites. Even for big WordPress sites, the percentage of SELECT queries will be much higher than that of INSERTs or UPDATEs.

We are pleased to announce support for monitoring CGminer in the latest version of LoadAvg! Now you can use LoadAvg as a full-fledged monitoring framework for your Linux-based mining rigs.


What is LoadAvg ?

LoadAvg is a powerful way to manage load, memory, and resource usage on Linux servers, cloud computers and virtual machines – and it's open source, so there's no price attached. With the addition of CGminer support, you can now use it as a mining monitoring tool.

We know it has a long way to go compared to other mining monitors, but coupled with its large array of monitoring modules and its low resource usage, it makes for a great monitor that you can run on anything from a Raspberry Pi and up.


LoadAvg only requires PHP and a web-based front end, like Nginx, Lighttpd or Apache, and works online as well as on mobile devices.

LoadAvg is still coming out of beta, so we are adding features frequently. Since this is our first release with mining support, we know we have a long way to go there as well, so we look forward to your feedback!

How to get up and running


Find out how to download and configure LoadAvg here


Setting up Miner Monitoring

Once you have installed LoadAvg, simply head to the module settings and enable the Mining module. Then add the server IP and port that CGminer is running on and you are good to go.

There are two values that you can set for monitoring, Overload 1 and Overload 2; set both of these to -1 to disable them.

Overload 1 is the Low Hash – anything that hashes below this value will be flagged.

Overload 2 is the High Hash – anything that hashes above this value will be flagged.

Settings For CGminer

To get LoadAvg to work with CGminer you need to enable the CGminer API and listen mode. This is how we did it here (we are running on ckpool):

sudo cgminer -o stratum+tcp://solo.ckpool.org:3333 -u bitcoinaddress -p x --api-allow W:0/0 --api-listen

We have only tested it with CGminer so far, and we only record basic information – we will start to add more features in the coming releases.


We are pleased to announce the release of LoadAvg 2.2. Version 2.2 builds on the core foundation delivered in the 2.0 branch and brings many bug fixes, new features and new plugins to extend its functionality.

Version 2.2 cleans up and extends the code in many of the existing charting modules, comes with two new plugins for Alerts management and Process monitoring, and a framework for server side and client side timezone monitoring. It also introduces initial mobile support, and is touch enabled on phones and tablets.

By popular request from many of our developers, it also has a bitcoin mining monitoring module, which for us is also a step into monitoring the IoT and a first step outside of core server monitoring.


We are very pleased with the result of the work that has gone into 2.2 and are starting work on 2.3 immediately which will focus mainly on streamlining and stabilizing the code in 2.2 as well as building out the Alerts and Process plugins.

You can download version 2.2 here


And discuss in our forums here


Thank you for your support,

The LoadAvg Team

We are pleased to announce the release of LoadAvg 2.1. Version 2.1 builds on the core foundation delivered in 2.0 but has been significantly re-coded with a true OOP architecture to allow for scalability and portability.

Version 2.1 introduces new modules such as Uptime, Swap and Processor monitoring, as well as plugins which can further extend the core framework for process monitoring and more.


It also has new UI updates that allow for drag and drop of modules in the charts, and stores module locations and expand/collapse status using cookies.

We have tested this release on Nginx and Lighttpd and are also beta-testing support for collectd, with full support to be released in version 2.2.

You can download version 2.1 here


And discuss in our forums here



The joy of profiling! Profiling is a powerful way to fine-tune your PHP applications before releases. It helps you pinpoint loose code and track down computationally expensive loops that can easily be refactored to speed up your application.

Since PHP has to load and run your entire application in one go, as opposed to storing parts of your application in memory for re-use, profiling is key to speedy performance and ensuring that your application is not a resource hog.

Doing some basic profiling for the upcoming LoadAvg 2.1 release, we were able to track down one processor-intensive inner loop that was not necessary, and we refactored two core functions that were used for dataset processing and were being called thousands of times. After refactoring we saw up to a 5x increase in application speed.

1. Install xDebug and KCacheGrind

xDebug is easy to install and configure, but there are a few tricks needed to ensure that it's running properly. We will assume that you already have your LAMP stack installed and PHP up and running. The next step is to install the PHP developer tools, which include PECL:
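On a yum-based CentOS LAMP stack that is typically the following (package names are an assumption for this setup; a compiler is included since PECL builds the extension from source):

```shell
sudo yum install php-devel php-pear gcc gcc-c++
```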

Then use PECL to install Xdebug:
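Something like the following should fetch and build the extension:

```shell
sudo pecl install xdebug
```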

It is important to know that Xdebug on its own only generates debugging information, so you will need another tool to make sense of it. KCacheGrind is a powerful desktop tool and is easy to install; we will be using it here:
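On a yum-based desktop this is usually just (package name assumed; it pulls in the KDE libraries it needs):

```shell
sudo yum install kcachegrind
```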

*Alternatively you can install webgrind – a web-based front end – if you don't have direct access to the server and need realtime access to the data. Note, however, that you can copy the Xdebug data to a local machine for analysis; debugging and profiling are very system intensive, so they shouldn't be done on production systems.

2. Configure PHP to use xDebug

The next step is to configure PHP to use xDebug. This means editing your php.ini settings file and adding support for the xDebug extension. Since we are doing browser-based profiling, we will also need to tell xDebug where to store its files so we can access them.

We store our files in /xdebug but you are free to store them wherever you wish – as long as it's not in /tmp and is writeable. The best place is really your home directory.

Note that /tmp is the default for Xdebug; however, it doesn't work for browser profiling because, due to security restrictions, Apache won't write to /tmp without making a mess of things.

First, make your Xdebug storage directory:
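For example, assuming the /xdebug location used in this article (the wide-open permissions are for convenience on a development box only):

```shell
sudo mkdir /xdebug
sudo chmod 777 /xdebug
```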

Now open up your PHP settings file with your editor (we use vim here) and add the settings needed to integrate Xdebug with PHP.
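On CentOS the file is usually /etc/php.ini (an assumed path; run php --ini to confirm yours):

```shell
sudo vim /etc/php.ini
```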

This is the code block you will need to add. The best place to add it is at the bottom of the php.ini file, in its own section right before the ;end. Replace "/xdebug/" with the location of the xdebug directory you created above.
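A typical block for Xdebug 2.x looks like this; the path to xdebug.so is an assumption and varies by system (pecl prints the real location at the end of its build):

```ini
[xdebug]
zend_extension = /usr/lib64/php/modules/xdebug.so
; profile only when triggered (e.g. by a browser plugin), not on every request
xdebug.profiler_enable = 0
xdebug.profiler_enable_trigger = 1
xdebug.profiler_output_dir = "/xdebug/"
```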

Now restart Apache / PHP, and if you get no errors then you should be up and running.
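On CentOS 6 that is typically:

```shell
sudo service httpd restart
```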

NOTE: At this point everything should work perfectly! However, SELinux can make using Xdebug a pain, as it restricts where Apache can write, where PHP can write and where scripts can run. If you are on a development box, it's best to turn it off and reboot before moving ahead.

To do so, edit /etc/sysconfig/selinux, change the SELINUX= setting from permissive (or enforcing) to disabled, and restart, as illustrated here.

3. Testing at the command line

You will need a PHP file to test on; the easiest way is to quickly create an info.php file with the following contents.
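For example, created from the shell in one go:

```shell
cat > info.php <<'EOF'
<?php
phpinfo();
EOF
```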

Now let's do a quick command-line test by calling PHP with Xdebug profiling turned on.
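With Xdebug 2.x the profiler can be forced on for a single run from the command line (the output directory is the /xdebug folder assumed throughout this article):

```shell
php -d xdebug.profiler_enable=1 -d xdebug.profiler_output_dir=/xdebug info.php
```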

You should see a cachegrind.out.xxxxx file in your xdebug folder if all went well.

*NOTE: you should regularly empty your /xdebug folder to clear out old profiling data.

4. Configure your browser to generate profile data

Now that you have xDebug installed and integrated with PHP, the next step is to configure your browser to generate profile data for you on demand. There are plugins available for all major browsers here.

The easiest Xdebug
This extension for Firefox was built to make debugging with an IDE easier.

Xdebug Helper for Chrome
This extension for Chrome will help you to enable/disable debugging and profiling with a single click.

Xdebug Toggler for Safari
This extension for Safari allows you to auto start Xdebug debugging from within Safari.

Xdebug launcher for Opera
This extension for Opera allows you to start an Xdebug session from Opera.

We will cover 'The easiest Xdebug' for Firefox, since Firefox is our default browser; however, the functionality is pretty much the same across the other browsers listed above.

Once installed, the plugin allows you to easily turn profiling on and off in the browser. You turn profiling on by clicking the small flowchart icon in your browser; it should light up and animate. On Chrome, you get a drop-down menu where you can enable profiling alone.

When profiling is on, every page load of a script on your server will generate a cachegrind.out.xxxxx file in your xdebug folder. When it's off, no profiling data is created.

If no data is showing up, see my note above about SELinux!

5. Viewing Xdebug profile data

Now comes the fun part. With kcachegrind you can easily load up your profile data and dig into what makes your application tick, and how to optimize it.

To start, fire up kcachegrind via the Applications menu (under Applications->KCacheGrind) or from the command line. Once kcachegrind is running, simply open the profile data you wish to inspect from your debug folder.


You can then start to dig in by looking at the Incl., Self and Called columns; click them to sort accordingly. Sorting by Self lets you see the computationally expensive functions, and sorting by Called shows you functions that are called frequently in your app and so should be optimized.

This tutorial is about getting up and running; we can do another one on using KCacheGrind itself later on. In the meantime, here are some great links:

Find out more about xDebug and profiling here


Find out more about KCacheGrind and profiling here


And some good getting started tips are found here


Anthony Ferrara, a developer advocate at Google, has published a blog post with some statistics showing the sorry state of affairs for website security involving PHP. After defining a list of secure and supported versions of PHP, he used data from W3Techs to find a rough comparison between the number of secure installs and the number of insecure or outdated installs.

After doing some analysis, Ferrara sets the upper bound on secure installs at 21.71%. He adds, “These numbers are optimistic. That’s because we’re counting all version numbers that are maintained by a distribution as secure, even though not all installs of that version number are going to be from a distribution.

Just because 5.3.3 is maintained by CentOS and Debian doesn’t mean that every install of 5.3.3 is maintained. There will be a small percentage of installs that are from-source. Therefore, the real ‘secure’ number is going to be less than quoted.” Ferrara was inspired to dig into the real world stats after another recent discussion of responsible developer practices.

Read the full article here:


This tutorial covers how to enable Apache server-status (mod_status) with extended status data on CentOS servers. This data is useful for Apache profiling and understanding what's happening on your web server.

It allows you to view a detailed status page for your web server which is useful for watching your web server’s performance during load testing or to gather activity data.

You can see what server-status looks like here:


and extended status looks like this


Make sure mod_status is installed

First you need to make sure that mod_status is installed. This is the Apache module that powers server-status. Apache is usually configured with mod_status already installed, but not accessible. To see if it's installed, first check your Apache config file at
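On CentOS the file is typically at the path below (an assumption for a yum-installed Apache); open it with your editor:

```shell
sudo vim /etc/httpd/conf/httpd.conf
```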

and search for
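The directive in question is the LoadModule line for mod_status, which normally looks like this:

```apache
LoadModule status_module modules/mod_status.so
```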

If there is a '#' sign in front of it then it's been disabled; you will need to remove it for the module to work.

Note: It is always advisable to make a copy of the httpd.conf file before editing it in case things go wrong!

Opening up server-status access

Once you have made sure that the mod_status module is active, you then need to enable the server-status location so you can access it. Search for the following code and edit it as shown below.
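With Apache 2.2 (the version shipped with CentOS 6) the stanza looks roughly like this; Apache 2.4 uses Require directives instead, so treat this as a sketch for 2.2:

```apache
<Location /server-status>
    SetHandler server-status
    Order deny,allow
    Deny from all
    Allow from 127.0.0.1
</Location>
```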

For security reasons we suggest you only give 'localhost' access to this data, as we have done.

Setting up extended-status

In order to enable access to 'extended-status' data, you will need to find and activate ExtendedStatus in the same config file as follows.
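The directive is a single line (uncomment it if it is already present but disabled):

```apache
# Collect per-request timing data; adds a small amount of overhead
ExtendedStatus On
```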

You will then need to restart Apache; a hard restart is best:
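On CentOS 6:

```shell
sudo service httpd restart
```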

However, you can also use the less intrusive way of restarting Apache:
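A graceful restart lets current requests finish before the configuration is reloaded:

```shell
sudo apachectl graceful
```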


To test that server-status is running you can use Lynx, a command-line ASCII browser which allows you to test whether things are working locally on your server.

You can easily install lynx via:
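On CentOS:

```shell
sudo yum install lynx
```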

Then hit the server-status page as follows

lynx http://localhost/server-status?refresh=5

To see the extended server data hit

lynx http://localhost/server-status/?auto


If you get any errors, then it's time to troubleshoot. A 'not found' error means that mod_status isn't properly enabled. A 'forbidden' error means that the Location configuration for /server-status won't let you access the page. A 'connection refused' most likely means that Apache isn't listening on port 80, so it may not be running.

For Systems with Virtual Hosts

Note that for systems with multiple virtual hosts it's a bit more tricky. We will update this guide shortly with instructions for these systems.

What does it all mean?

Most often the mod_status output is used by other tools to chart the server’s activity over time. Viewed directly, the status page is handy for a quick overview of what your server is doing at a given moment. Some of the displayed data can indicate problems that merit investigation.

Examples of items to watch for on the status page include:

  • High CPU usage for Apache could indicate a problem with an application being run through a module like mod_php.
  • While it is normal to see several keep-alives being handled by Apache's workers, if they constitute a vast majority of the worker statuses you see, then your web server might be keeping old connections alive too long. You may want to look into reducing the amount of time connections are kept alive by Apache via the KeepAliveTimeout directive.
  • If you see very few inactive workers (represented by . characters), you may want to increase the MaxClients value for your Apache server. Making sure you have idle workers ready to handle new requests can improve the web server's responsiveness (assuming you have enough memory on the server available to handle the extra connections).


Apache’s status module can help you to highlight problems that would be otherwise difficult to isolate using standard system tools. Even if you’re only enabling mod_status for a monitoring tool, knowing how to access and read the data can help you be a more effective web server administrator.

This article discusses iostat and how it can identify I/O-subsystem and CPU bottlenecks. iostat works by sampling the kernel's address space and extracting data from various counters that are updated every clock tick (1 clock tick = 10 milliseconds [ms]). The results — covering CPU and I/O subsystem activity — are reported as per-second rates or as absolute values for the specified interval. The iostat tool is a part of the sysstat package; to install it, simply type:

On CentOS/RHEL:

sudo yum install sysstat

On Debian/Ubuntu:

sudo apt-get install sysstat


Normally, iostat is issued with both an interval and a count specified, with the report sent to standard output or redirected to a file. To get up and running quickly, you can try the following command, which outputs data in kilobytes:

# iostat -k

Linux 2.6.el6.x86_64 (jahshaka)   11/21/2014   _x86_64_   (2 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.73    0.00    0.49    0.47    0.06   95.25

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
vda               8.84        20.17        89.07   39055085  172488380

The command syntax appears below:

iostat [-c] [-d] [Drives] [Interval [Count]] [-k]

The -d flag causes iostat to provide only disk statistics for all drives. The -c flag causes iostat to provide only CPU statistics.

NOTE: The -c and -d options are mutually exclusive.

If you specify one or more drives, the output is limited to those drives. Multiple drives can be specified; separate them in the syntax with spaces.

You can specify a time in seconds for the interval between records to be included in the reports. The initial record contains statistics for the time since system boot. Succeeding records contain data for the preceding interval. If no interval is specified, a single record is generated.

If you specify an interval, the count of the number of records to be included in the report can also be specified. If you specify an interval without a count, iostat will continue running until it is killed.

iostat -k 5

CPU statistics in the iostat output

The first report generated by the iostat command is the CPU Utilization Report. For multiprocessor systems, the CPU values are global averages among all processors.

iostat -c -k

The report has the following format:

%user: Shows the percentage of CPU utilization that occurred while executing at the user level (application). A UNIX process can execute in user or system mode. When in user mode, a process executes within its own code and does not require kernel resources.

%nice: Shows the percentage of CPU utilization that occurred while executing at the user level with nice priority.

%system: Shows the percentage of CPU utilization that occurred while executing at the system level (kernel). This includes CPU resources consumed by kernel processes (kprocs) and others that need access to kernel resources. For example, the reading or writing of a file requires kernel resources to open the file, seek a specific location, and read or write data. A UNIX process accesses kernel resources by issuing system calls.

%steal: Shows the percentage of time spent in involuntary wait by the virtual CPU or CPUs while the hypervisor was servicing another virtual processor.

%idle: Shows the percentage of time that the CPU or CPUs were idle and the system did not have an outstanding disk I/O request. If there are no processes on the run queue, the system dispatches a special kernel process called wait. On most systems, the wait process ID (PID) is 516.

%iowait: Shows the percentage of time that the CPU or CPUs were idle during which the system had an outstanding disk I/O request. The iowait state is different from the idle state in that at least one process is waiting for local disk I/O requests to complete. Unless the process is using asynchronous I/O, an I/O request to disk causes the calling process to block (or sleep) until the request is completed. Once a process's I/O request completes, it is placed on the run queue.


For multiprocessor feedback you can also run mpstat. The mpstat command displays activities for each available processor, processor 0 being the first one. Global average activities among all processors are also reported. The mpstat command can be used on both SMP and UP machines, but on the latter, only global average activities will be printed:

# mpstat -P ALL

Analyzing the data

Typically, the system is CPU bound if the sum of user and system time exceeds 90 percent of CPU resource on a single-user system or 80 percent on a multi-user system. This condition means that the CPU is the limiting factor in system performance.

The ratio of user to system mode is determined by workload and is more important when tuning an application than when evaluating performance.

A key factor when evaluating CPU performance is the size of the run queue (provided by the vmstat command). In general, as the run queue increases, users will notice degradation (an increase) in response time.

A high iowait percentage indicates the system has a memory shortage or an inefficient I/O subsystem configuration. Understanding the I/O bottleneck and improving the efficiency of the I/O subsystem require more data than iostat can provide. However, typical solutions might include:

  • limiting the number of active logical volumes and file systems placed on a particular physical disk (The idea is to balance file I/O evenly across all physical disk drives.)
  • spreading a logical volume across multiple physical disks (This is useful when a number of different files are being accessed.)
  • creating multiple JFS logs for a volume group and assigning them to specific file systems (This is beneficial for applications that create, delete, or modify a large number of files, particularly temporary files.)
  • backing up and restoring file systems to reduce fragmentation (Fragmentation causes the drive to seek excessively and can be a large portion of overall response time.)
  • adding additional drives and rebalancing the existing I/O subsystem

On systems running a primary application, high I/O wait percentage may be related to workload. In this case, there may be no way to overcome the problem. On systems with many processes, some will be running while others wait for I/O. In this case, the iowait can be small or zero because running processes “hide” wait time. Although iowait is low, a bottleneck may still limit application performance. To understand the I/O subsystem thoroughly, examine the statistics in the next section.

Disk statistics in the iostat output

The disk statistics portion of the iostat output provides a breakdown of I/O usage. This information is useful in determining whether a physical disk is limiting performance.

iostat -d -k

The Devices: column shows the names of the physical volumes. They can vary based on your system between disk, vda, sda or other values, followed by a number.

A drive is active during data transfer and command processing, such as seeking to a new location. The disk-use percentage is directly proportional to resource contention and inversely proportional to performance. As disk use increases, performance decreases and response time increases. In general, when a disk’s use exceeds 70 percent, processes are waiting longer than necessary for I/O to complete because most UNIX processes block (or sleep) while waiting for their I/O requests to complete.

tps: Indicates the number of transfers per second that were issued to the device. A transfer is an I/O request to the device. Multiple logical requests can be combined into a single I/O request to the device. A transfer is of indeterminate size.

Blk_read/s (kB_read/s when -k is used): Indicates the amount of data read from the device, expressed in a number of blocks per second. Blocks are equivalent to sectors with kernels 2.4 and later and therefore have a size of 512 bytes. With older kernels, a block is of indeterminate size.

Blk_wrtn/s (kB_wrtn/s when -k is used): Indicates the amount of data written to the device, expressed in a number of blocks per second.

Blk_read (kB_read when -k is used): The total number of blocks read.

Blk_wrtn (kB_wrtn when -k is used): The total number of blocks written.

Additional modes

You can also display statistics for a specific device only with -p and the device name as arguments. With the -N (uppercase) parameter you can view LVM statistics only.
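For instance (the device name here is an assumption; substitute your own):

```shell
# per-partition statistics for one device only, in kilobytes
iostat -p sda -k

# report device-mapper/LVM volumes by their registered names
iostat -N -k
```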

Analyzing the data

Taken alone, there is no unacceptable value for any of the preceding fields because statistics are too closely related to application characteristics, system configuration, and types of physical disk drives and adapters. Therefore, when evaluating data, you must look for patterns and relationships. The most common relationship is between disk utilization and data transfer rate.

For example, if an application reads and writes sequentially, you should expect a high disk-transfer rate when you have a high disk-busy rate. (NOTE: Kb_read and Kb_wrtn can confirm an understanding of an application’s read and write behavior but they provide no information on the data access patterns.)

Generally you do not need to be concerned about a high disk-busy rate as long as the disk-transfer rate is also high. However, if you get a high disk-busy rate and a low data-transfer rate, you may have a fragmented logical volume, file system, or individual file.


The primary purpose of the iostat tool is to detect I/O bottlenecks by monitoring the disk utilization. iostat can also be used to identify CPU problems, assist in capacity planning, and provide insight into solving I/O problems. Armed with both vmstat and iostat, you can capture the data required to identify performance problems related to CPU, memory, and I/O subsystems.

We are pleased to announce the release of LoadAvg 2.0. Version 2.0 is a rewrite from the ground up and features a completely new framework that has been redesigned with speed and scalability in mind.

Version 2.0 can easily be expanded through the addition of new modules, and currently includes support for monitoring Server Load, Memory, Disk Usage & Swap, Network Bandwidth (across multiple interfaces), Apache Usage, MySQL throughput and SSH access logs.


With this new framework in place, we will now start to focus on streamlining the core and adding new features and functionality, including:

1. Alerts and Alarms
2. Better server data
3. Security data
4. Additional monitoring modules


We are also working on the development of GridLoad (www.gridload.com), a web-based SaaS centralised solution that runs on the LoadAvg Server so you can easily view and manage all your servers from a single location. To facilitate this we are also working on a standalone C++ logging daemon, loadavgd, to deliver a streamlined logger that eliminates the PHP/Apache requirements LoadAvg currently has.

About LoadAvg

LoadAvg is free software that will change the way you manage your servers by helping you easily manage performance and stay on top of your hardware. LoadAvg is easy to download and install, and it features a built-in update system to make sure you have the latest version.