Converting OS X .plist to XML

The failure of my MacBook Pro didn't cost much in terms of data loss; all the really crucial data was actually there. There are, though, some bits here and there that need to be moved over to my Linux box. One of the most important for everyday use was the feed list for my RSS aggregator. I was using the elegant, nice and simple NewsFire, which stores its feed list in a binary plist (an XML-like format), and I had never bothered exporting the feed list to OPML. The plutil command in OS X can convert a binary .plist file into XML, but I had no other Mac to run it on, or to copy the plist over to and export OPML from there. I therefore had to find a workaround for getting the plist to work with RSSOwl or Liferea. I found the Perl plutil for Linux, which converted the binary plist to standard XML. RSSOwl then managed to import the XML with all the blog feeds, just as they were in NewsFire.
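
For reference, on a Mac the conversion would have been a one-liner along these lines (the file names here are just placeholders):

plutil -convert xml1 Feeds.plist -o Feeds.xml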

As for Liferea, it wouldn't import the XML file as it wasn't proper OPML. But since RSSOwl had imported the XML file, I exported the feed list from there as OPML, which could then be imported into Liferea or any other application.

HFS+ on Linux

My MacBook Pro seems to have died. Its hard disk holds lots of data that have not been backed up anywhere else: the important data have mostly been backed up, but the latest changes haven't. Beyond that, I had to check the hard disk for bad sectors and the like, and since I don't have another Mac to test it on, I had to do it on my Fedora box. My system (Fedora 15) would automatically detect and mount the HFS+ hard disk in read-only mode. It was also missing the fsck tool for HFS+ partitions; getting it required downloading and installing hfsplus-tools via yum. However, fsck.hfs will not let you scan the partition if it has journalling on, which is the case with HFS+ partitions by default, unless you use the --force option.
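
Getting the tools is a one-liner (assuming the stock Fedora repositories, which is where the package came from here):

# yum install hfsplus-tools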

But how would I turn off journalling when I don't have a Mac system to attach my disk to? After a couple of minutes I came across a post on the Ubuntu forum which linked to this blog. The author provides C code that turns journalling off; a fixed version of this code can be found here. The compiled code results in an executable which takes as its only argument the partition on which you want to turn journalling off.

# gcc journalling_off.c -o journalling_off
# ./journalling_off /dev/sdg2

The next step was to run fsck on the target disk:

# fsck.hfs /dev/sdg2
** /dev/sdg2
** Checking HFS Plus volume.
** Checking Extents Overflow file.
** Checking Catalog file.
** Checking multi-linked files.
** Checking Catalog hierarchy.
** Checking Extended Attributes file.
** Checking volume bitmap.
** Checking volume information.
** The volume OS X appears to be OK.

The disk looks OK. The next step is to mount it with read-write permissions:

# mount -t hfsplus -o rw,user /dev/sdg2 /mnt/osx

The next issue encountered was the different UIDs between my account on the OS X system and the one on the Linux system. The next step was therefore to change the ownership of everything under the user's directory on the OS X disk, so I could access it with write permissions from my Linux box without problems:

# find panoskrt/ -uid 501 -exec chown panoskrt {} \;
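
As a side note, running "ls -ln /mnt/osx" against the mount point used above lists raw numeric UIDs instead of usernames, which helps confirm which UID to match in the find command.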

Altitude diving depth and NITROX calculations

For several reasons I've found myself diving (with SCUBA) in inland lakes at altitude. Due to the lower atmospheric pressure, altitude diving requires different depth calculations. There are diving tables and computer software that help divers plan a safe dive. Still, I thought of making my own "quick and dirty" script for calculating the theoretical ocean depth (TOD) of an altitude dive, the depth of the safety stop at altitude, as well as the best NITROX mix at the given altitude for a PO2 of 1.2, 1.4 and 1.6.

The script accepts two parameters: the altitude and the depth. For instance, for a dive at 1350 m altitude and 30 m depth:

 ./calcDepth.sh 1350 30
=======================================================
Every individual diver is responsible for planning 
and conducting dives using SCUBA equipment up to the 
trained and certified qualification he or she holds.

The creator of this program does not have any 
responsibility for symptoms of Decompression Sickness 
when the suggested values of this program are used 
for conducting a dive.
========================================================


Altitude: 	1350 m
Depth: 		30 mfw
Pressure: 	.86 atm
TOD: 		33.65900 msw
Safety Stop: 	4.45 mfw

Best NITROX mix with 1.2 PO2: 27.00
Best NITROX mix with 1.4 PO2: 32.00
Best NITROX mix with 1.6 PO2: 36.00
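
For reference, the TOD figure above follows directly from the formula in the script below: with a depth of 30 mfw and a pressure of roughly 0.865 atm at 1350 m, TOD = 30 x (1/0.865) x (10.0584/10.3632) ≈ 33.66 msw, in line with the 33.65900 msw printed above (the small difference comes from bc truncating at five decimal places).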

#!/bin/bash
############################################################################
# Copyright (C) 2011  Panagiotis Kritikakos <panoskrt@gmail.com>           #
#                                                                          #
#    This program is free software: you can redistribute it and/or modify  #
#    it under the terms of the GNU General Public License as published by  #
#    the Free Software Foundation, either version 3 of the License, or     #
#    (at your option) any later version.                                   #
#                                                                          #
#    This program is distributed in the hope that it will be useful,       #
#    but WITHOUT ANY WARRANTY; without even the implied warranty of        #
#    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the         #
#    GNU General Public License for more details.                          #
#                                                                          #
#    You should have received a copy of the GNU General Public License     #
#    along with this program.  If not, see <http://www.gnu.org/licenses/>. #
############################################################################
# Constants: metres of sea water (MSW) and metres of fresh water (MFW)
# per atmosphere, plus the standard safety stop depth (in msw) at sea level.
MSW=10.0584
MFW=10.3632
SSDSW=5
ALT=$1   # altitude in metres
Da=$2    # dive depth in metres of fresh water

disclaimer()
{
echo "
=======================================================
Every individual diver is responsible for planning
and conducting dives using SCUBA equipment up to the
trained and certified qualification he or she holds.

The creator of this program does not have any
responsibility for symptoms of Decompression Sickness
when the suggested values of this program are used
for conducting a dive.
========================================================
"
}

if [ -z "$ALT" ] || [ -z "$Da" ]; then
  echo " Specify altitude and depth: ./calcDepth.sh 1350 24"
  exit 1
else
  clear
  # Atmospheric pressure at altitude, expressed as a percentage of 1 atm
  # (approximated as a drop of 1 percentage point per 100 m of elevation)
  Pa=`echo "100-(0.01*$ALT)" | bc`

  # Theoretical ocean depth (msw) and safety stop depth at altitude (mfw)
  TOD=`echo "scale=5; ($Da*(1/$Pa)*($MSW/$MFW))*100" | bc`
  SSDA=`echo "scale=2; ($SSDSW*($Pa/1)*($MFW/$MSW))/100" | bc`

  PAatm=`echo "scale=2; $Pa/100" | bc`

  # Best NITROX mix (% O2) for each PO2 limit at the theoretical ocean depth
  N1=`echo "scale=2; 100*(1.2/(($TOD/10)+1))" | bc`
  N2=`echo "scale=2; 100*(1.4/(($TOD/10)+1))" | bc`
  N3=`echo "scale=2; 100*(1.6/(($TOD/10)+1))" | bc`

  echo
  disclaimer
  echo

  printf "Altitude: \t$ALT m\n"
  printf "Depth: \t\t$Da mfw\n"
  printf "Pressure: \t$PAatm atm\n"
  printf "TOD: \t\t$TOD msw\n"
  printf "Safety Stop: \t$SSDA mfw\n\n"
  printf "Best NITROX mix with 1.2 PO2: $N1\n"
  printf "Best NITROX mix with 1.4 PO2: $N2\n"
  printf "Best NITROX mix with 1.6 PO2: $N3\n\n"
  exit 0
fi

Speedup and efficiency shell calculator

The following script can be used to calculate the speedup and efficiency of a parallel code compared to its serial version. It is a pretty straightforward process. The script can be used either on its own or as part of another script to automate generating the required results. It accepts three arguments: 1) the serial execution time, 2) the parallel execution time, 3) the number of processors.

./SEcalc.sh <serial> <parallel> <procs>
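
To make the arithmetic concrete (illustrative numbers only): with a serial runtime of 10.0 s and a parallel runtime of 2.8 s on 4 processors, the speedup is 10.0 / 2.8 ≈ 3.57 and the efficiency 3.57 / 4 ≈ 0.89, i.e. about 89%.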

Script:

#!/bin/bash
############################################################################
# Copyright (C) 2011  Panagiotis Kritikakos <panoskrt@gmail.com>           #
#                                                                          #
#    This program is free software: you can redistribute it and/or modify  #
#    it under the terms of the GNU General Public License as published by  #
#    the Free Software Foundation, either version 3 of the License, or     #
#    (at your option) any later version.                                   #
#                                                                          #
#    This program is distributed in the hope that it will be useful,       #
#    but WITHOUT ANY WARRANTY; without even the implied warranty of        #
#    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the         #
#    GNU General Public License for more details.                          #
#                                                                          #
#    You should have received a copy of the GNU General Public License     #
#    along with this program.  If not, see <http://www.gnu.org/licenses/>. #
############################################################################

if [ "$#" -eq "3" ]; then
   runtime1proc=$1
   runtimeNproc=$2
   totalprocs=$3
   speedup=`echo "${runtime1proc}/${runtimeNproc}" | bc -l`;
   efficiency=`echo "${speedup}/${totalprocs}" | bc -l`;

   printf "\n Total processors: ${totalprocs}\n\n";
   printf " Runtime for serial code: ${runtime1proc}\n Runtime for parallel code: \
   ${runtimeNproc}\n\n";
   printf " Speedup: ${speedup}\n Efficiency: ${efficiency}\n\n";
else
   printf "\n Usage: SEcalc.sh   \n\n";
   printf " SEcalc.sh 0.350 0.494 2\n\n";
fi

GPU programming at a glance

On my MacBook I have an nVIDIA GeForce 8600M GT, which is CUDA-enabled, something I never bothered checking until very recently. nVIDIA provides online the required driver, the SDK and additional "CUDA Developer" resources, as they call them, with lots of sample files to test your system's hardware as well as actual code, including some parallel samples.
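
As a quick sanity check, the SDK's deviceQuery sample reports whether a CUDA-capable device is found. Something along these lines should do it, although the install path and the release subdirectory are assumptions that vary between SDK versions and platforms:

$ cd ~/NVIDIA_GPU_Computing_SDK/C
$ make
$ ./bin/darwin/release/deviceQuery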

The CUDA toolkit seems to provide all you need to get started:

  • C/C++ compiler
  • Visual Profiler
  • GPU-accelerated BLAS library
  • GPU-accelerated FFT library
  • GPU-accelerated Sparse Matrix library
  • GPU-accelerated RNG library
  • Additional tools and documentation

It also includes OpenCL samples to play with. However, the OpenCL driver needs to be installed in the first place. There is a pre-release version, and in order to download it you need to register with nVIDIA. They have also published a book, "CUDA by Example", which is not free apart from some excerpts. Nevertheless, the book's sample code is free to download.

AMD/ATI also have their answer to CUDA, "ATI Stream". From what I gathered, it seems to support only OpenCL. I don't have a Stream-supported ATI card at the moment, so I couldn't try that one.

To close, there's an interesting presentation that covers the basics of GPUs and how to program them (CUDA-based): Programming and optimization of applications for multiple GPU

Nagios remote resources monitoring using SSH (check_by_ssh)

Recently I have been setting up Nagios, as the increasing number of machines and services per machine makes it difficult to tell what is wrong and what is not, or when a system or a service needs more attention.

Following the Nagios documentation, setting up the monitoring server is pretty much straightforward. Starting to monitor exposed services such as SSH, HTTP, FTP, MySQL and PostgreSQL is also straightforward. Plugins such as check_tcp and check_udp also provide an easy way to see whether a service is actually running. For instance, for a CVS pserver, you can use the check_tcp plugin to check whether port 2401 is open or not. It is not the most thorough way to test a service, but it works fine for a quick check.
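
For instance, such a check run from the Nagios server could look something like the following (the hostname is a placeholder; the plugin path is the usual Red Hat location also used further down):

/usr/lib/nagios/plugins/check_tcp -H cvs.example.com -p 2401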

The systems whose local resources I had to monitor were of three types: LCFG Linux, self-managed Linux and self-managed Solaris. This differentiation brings a bit of complexity of its own, as each type needs a different way of setting up monitoring over SSH, while still using the same principles and techniques. The easiest are the LCFG ones, as a configuration header was created and "included" on every system that needed to be monitored. It looks something like the following:

/** Configuration for monitored remote hosts.
*   This header will allow Nagios server to monitor
*   services on remote system that use this header by
*   running check_by_ssh.
**/

/** Nagios will fail to run remote command if an SSH banner is displayed **/
!openssh.sshdopts       mREMOVE(Banner)

!tcpwrappers.allow_sshd mCONCATQ(" <Nagios_server_hostname_goes_here>")

!auth.extrapasswd       mADD(nagios)
auth.pwent_nagios       nagios:*:007:007:Nagios:/home/nagios:/bin/bash
!auth.extragroup        mADD(nagios)
auth.grpent_nagios      nagios:*:007:apache

/** You may add the "nagios" user to the user access list of the machine,
depending on the authentication method **/

/** Public key authentication for 'nagios' user **/
!file.files             mADD(nagiosKey)
file.file_nagiosKey     /localdisk/home/nagios/.ssh/authorized_keys
file.type_nagiosKey     literal
file.mode_nagiosKey     0644
!file.tmpl_nagiosKey    mCONCATQ("<key_goes_here>")

!profile.packages       mEXTRA(+nagios-plugins-1.4.13-4.el5)

/** List of plugins to be installed remotely **/
!profile.packages       mEXTRA(+nagios-plugins-disk-1.4.13-4.el5)
!profile.packages       mEXTRA(+nagios-plugins-load-1.4.13-4.el5)
!profile.packages       mEXTRA(+nagios-plugins-procs-1.4.13-4.el5)
!profile.packages       mEXTRA(+nagios-plugins-swap-1.4.13-4.el5)
!profile.packages       mEXTRA(+nagios-plugins-users-1.4.13-4.el5)

The self-managed systems make use of either a local or a network "nagios" account with public key authentication, and each remote system needs to have its own set of the required plugins installed manually. A single compile of the plugins in the NFS home directory of the network "nagios" account might not work when you have several different *NIX operating systems.

I have configured the Nagios config files for remote services based on this *very* helpful and clear guide: http://wiki.nagios.org/index.php/Howtos:checkbyssh_RedHat

The key point with the remote checks is to define the right commands for Nagios, pointing at the right location of the plugins on the remote systems and passing the correct arguments. So five remote services have been defined, matching the RPMs listed above: check_disk, check_load, check_procs, check_swap, check_users.

To call each remote plugin, new command definitions need to be added to /etc/nagios/commands.cfg:

define command{
        command_name    check_remote_disk
        command_line    $USER1$/check_by_ssh -p $ARG1$ \
        -H $HOSTADDRESS$ -C '/usr/lib/nagios/plugins/check_disk \
        -w $ARG2$ -c $ARG3$ -p $ARG4$'
        }

define command{
        command_name    check_remote_users
        command_line    $USER1$/check_by_ssh -p $ARG1$ \
        -H $HOSTADDRESS$ -C '/usr/lib/nagios/plugins/check_users \
        -w $ARG2$ -c $ARG3$'
        }

define command{
        command_name    check_remote_load
        command_line    $USER1$/check_by_ssh -p $ARG1$ \
        -H $HOSTADDRESS$ -C '/usr/lib/nagios/plugins/check_load \
        -w $ARG2$ -c $ARG3$'
        }

define command{
        command_name    check_remote_procs
        command_line    $USER1$/check_by_ssh -p $ARG1$ \
        -H $HOSTADDRESS$ -C '/usr/lib/nagios/plugins/check_procs \
        -w $ARG2$ -c $ARG3$ -s $ARG4$'
        }

define command{
        command_name    check_remote_swap
        command_line    $USER1$/check_by_ssh -p $ARG1$ \
        -H $HOSTADDRESS$ -C '/usr/lib/nagios/plugins/check_swap \
        -w $ARG2$ -c $ARG3$'
        }

Depending on the setup, you might need to change the location of the plugins or use more options, such as the user to log in as, the location of keys, IPv4 or IPv6 connections, the use of SSH1 or SSH2 and so on. Once the commands have been defined, they can be used to define services within the host configuration files.
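
For instance, a service that uses check_remote_disk to watch the root partition could look something like the following; the host_name, the generic-service template and the thresholds are placeholders for whatever the local setup uses:

define service{
        use                     generic-service
        host_name               remote-host.example.org
        service_description     Disk space /
        check_command           check_remote_disk!22!20%!10%!/
        }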

The main reason I wanted to avoid NRPE was that it would mean exposing one more service, even if only internally, on systems from which you want to expose only what is necessary. NRPE would be useful if Windows servers had to be monitored for their resources.

Restarting VMware Fusion functionality

I just tried to boot my FreeBSD-7.2 VM on VMware Fusion and got an error message that it would not have any network availability, as the "network bridge on device /dev/vmnet0 is not running". I checked and indeed, there was no vmnet* interface up. A look in /Library/Application Support/VMware Fusion reveals the file boot.sh, which happens to be the only available script.
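
(In that state, a quick check along the lines of "ifconfig | grep vmnet" comes back empty.)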

$ ls -lrt
total 16944
drwxr-xr-x   3 root  wheel      102 12 Jul  2007 licenses
drwxr-xr-x   6 root  wheel      204 12 Jul  2007 kexts
-rw-r--r--   1 root  wheel      373 30 Oct  2007 license.fusion.site.6.0.200610
drwxr-xr-x   3 root  wheel      102  2 Nov  2007 vmx
-rwxr-xr-x   1 root  wheel  3590412 19 Apr  2008 vmware-vdiskmanager
-rwsr-xr-x   1 root  wheel   366264 19 Apr  2008 vmware-authd
-rwsr-xr-x   1 root  wheel  3194884 19 Apr  2008 vmware-rawdiskCreator
-rwxr-xr-x   1 root  wheel   200968 19 Apr  2008 vmware-ntfs
-rwxr-xr-x   1 root  wheel   152533 19 Apr  2008 vmware-config-net.pl
-rwxr-xr-x   1 root  wheel    74916 19 Apr  2008 vmnet-sniffer
-rwxr-xr-x   1 root  wheel    61428 19 Apr  2008 vmnet-netifup
-rwxr-xr-x   1 root  wheel   501632 19 Apr  2008 vmnet-natd
-r--r--r--   1 root  wheel     1241 19 Apr  2008 vmnet-nat.conf
-r--r--r--   1 root  wheel      742 19 Apr  2008 vmnet-dhcpd.conf
-rwxr-xr-x   1 root  wheel   333464 19 Apr  2008 vmnet-dhcpd
-rwxr-xr-x   1 root  wheel   120612 19 Apr  2008 vmnet-bridge
-rwxr-xr-x   1 root  wheel     7932 19 Apr  2008 vm-support.tool
-rwxr-xr-x   1 root  wheel    17186 19 Apr  2008 boot.sh
drwxr-xr-x   7 root  wheel      238 15 May  2008 tools-upgraders
drwxr-xr-x   6 root  wheel      204 15 May  2008 messages
drwxr-xr-x  13 root  wheel      442 15 May  2008 isoimages
drwxr-xr-x  17 root  wheel      578 15 May  2008 vnckeymap
drwxr-xr-x   5 root  wheel      170 15 May  2008 vmnet8
drwxr-xr-x   3 root  wheel      102 15 May  2008 vmnet1
-rw-r--r--   1 root  wheel     5612 15 May  2008 locations
-rw-r--r--   1 root  wheel       81 15 May  2008 config
drwxr-xr-x   3 root  wheel      102 15 May  2008 Uninstall VMware Fusion.app

Its usage is pretty simple:

$ ./boot.sh
Usage: ./boot.sh {--start|--stop|--restart}
$ sudo ./boot.sh --start
VMware Fusion 87978: Starting VMware Fusion:
kextload: extension /Library/Application Support/VMware Fusion/kexts/vmmon.kext is already loaded
kextload: /Library/Application Support/VMware Fusion/kexts/vmci.kext loaded successfully
kextload: /Library/Application Support/VMware Fusion/kexts/vmioplug.kext loaded successfully
kextload: extension /Library/Application Support/VMware Fusion/kexts/vmnet.kext is already loaded
Internet Software Consortium DHCP Server 2.0
Copyright 1995, 1996, 1997, 1998, 1999 The Internet Software Consortium.
All rights reserved.

Please contribute if you find this software useful.
For info, please visit http://www.isc.org/dhcp-contrib.html

Configured subnet: 172.16.17.0
Setting vmnet-dhcp IP address: 172.16.17.254
Opened: ??
Recving on     VNet/vmnet8/172.16.17.0
Sending on     VNet/vmnet8/172.16.17.0
Internet Software Consortium DHCP Server 2.0
Copyright 1995, 1996, 1997, 1998, 1999 The Internet Software Consortium.
All rights reserved.

Please contribute if you find this software useful.
For info, please visit http://www.isc.org/dhcp-contrib.html

Configured subnet: 172.16.123.0
Setting vmnet-dhcp IP address: 172.16.123.254
Opened: ??
Recving on     VNet/vmnet1/172.16.123.0
Sending on     VNet/vmnet1/172.16.123.0

Once that finished, the VM could start again without complaints and got its IP via the LAN's DHCP.

Package managers for OS X

Installing common *nix applications from source on OS X can be a real pain in the neck. It may even beat Slackware on some occasions :p The tools provided by MacPorts and Fink make life much easier and more productive. In the following example, Fink is used to install gnuplot through the Debian-like apt-get:

panoskrt$ uname -rs
Darwin 9.6.0
panoskrt$ sudo apt-get install gnuplot
Reading Package Lists... Done
Building Dependency Tree... Done
The following extra packages will be installed:
  aquaterm aquaterm-shlibs gd2-shlibs libjpeg-bin pdflib-shlibs readline-shlibs texinfo
The following NEW packages will be installed:
  aquaterm aquaterm-shlibs gd2-shlibs gnuplot libjpeg-bin pdflib-shlibs readline-shlibs texinfo
0 packages upgraded, 8 newly installed, 0 to remove and 0  not upgraded.
Need to get 5461kB of archives. After unpacking 13.6MB will be used.
Do you want to continue? [Y/n] y
Get:1 http://bindist.finkmirrors.net 10.5/release/main aquaterm-shlibs 1.0.0-1003 [31.6kB]
Get:2 http://bindist.finkmirrors.net 10.5/release/main aquaterm 1.0.0-1003 [88.2kB]
Get:3 http://bindist.finkmirrors.net 10.5/release/main libjpeg-bin 6b-17 [130kB]
Get:4 http://bindist.finkmirrors.net 10.5/release/main gd2-shlibs 2.0.33-3 [130kB]
Get:5 http://bindist.finkmirrors.net 10.5/release/main texinfo 4.8-1002 [1013kB]
Get:6 http://bindist.finkmirrors.net 10.5/release/main readline-shlibs 4.3-1028 [108kB]
Get:7 http://bindist.finkmirrors.net 10.5/release/main pdflib-shlibs 5.0.1-2 [1827kB]
Get:8 http://bindist.finkmirrors.net 10.5/release/main gnuplot 4.0.0-1005 [2133kB]
Fetched 5461kB in 43s (126kB/s)
Selecting previously deselected package aquaterm-shlibs.
(Reading database ... 5617 files and directories currently installed.)
Unpacking aquaterm-shlibs (from .../aquaterm-shlibs_1.0.0-1003_darwin-i386.deb) ...
Selecting previously deselected package aquaterm.
Unpacking aquaterm (from .../aquaterm_1.0.0-1003_darwin-i386.deb) ...
Selecting previously deselected package libjpeg-bin.
Unpacking libjpeg-bin (from .../libjpeg-bin_6b-17_darwin-i386.deb) ...
Selecting previously deselected package gd2-shlibs.
Unpacking gd2-shlibs (from .../gd2-shlibs_2.0.33-3_darwin-i386.deb) ...
Selecting previously deselected package texinfo.
Unpacking texinfo (from .../texinfo_4.8-1002_darwin-i386.deb) ...
Selecting previously deselected package readline-shlibs.
Unpacking readline-shlibs (from .../readline-shlibs_4.3-1028_darwin-i386.deb) ...
Selecting previously deselected package pdflib-shlibs.
Unpacking pdflib-shlibs (from .../pdflib-shlibs_5.0.1-2_darwin-i386.deb) ...
Selecting previously deselected package gnuplot.
Unpacking gnuplot (from .../gnuplot_4.0.0-1005_darwin-i386.deb) ...
Setting up aquaterm-shlibs (1.0.0-1003) ...
Setting up aquaterm (1.0.0-1003) ...
Setting up libjpeg-bin (6b-17) ...

Setting up gd2-shlibs (2.0.33-3) ...
Setting up texinfo (4.8-1002) ...
* Texinfo: (texinfo).           The GNU documentation format.
install-info(/sw/share/info/texinfo): creating new section `Texinfo documentation system'
* info standalone: (info-stnd).            Read Info documents without Emacs.
* Info: (info). How to use the documentation browsing system.

Setting up readline-shlibs (4.3-1028) ...
Setting up pdflib-shlibs (5.0.1-2) ...
Setting up gnuplot (4.0.0-1005) ...
* GNUPLOT: (gnuplot). An Interactive Plotting Program
install-info(/sw/share/info/gnuplot.info): creating new section `Math'

panoskrt$ which gnuplot;
/sw/bin/gnuplot
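
The MacPorts equivalent would be along these lines, assuming MacPorts itself is already installed:

$ sudo port install gnuplot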

Fink: http://www.finkproject.org
MacPorts: http://www.macports.org/