<![CDATA[andrewklau ]]>https://www.andrewklau.com/Ghost 0.9Sat, 12 Jan 2019 01:09:07 GMT60<![CDATA[Where to buy bitcoin in Australia]]>This past week the Bitcoin drop hit the headlines quite a few times. China's ICO regulation announcement caused quite the stir, opening the opportunity for many spectators to jump in.

However, purchasing bitcoin in Australia is not as simple as that. This past week I went through three of

]]>
https://www.andrewklau.com/purchasing-bitcoin-in-australia/af5c7d1b-68f5-49d3-b01b-16bddf7faa5cWed, 06 Sep 2017 00:41:57 GMTThis past week the Bitcoin drop hit the headlines quite a few times. China's ICO regulation announcement caused quite the stir, opening the opportunity for many spectators to jump in.

However, purchasing bitcoin in Australia is not as simple as that. This past week I went through three of the most popular options, jumped through the hoops of identity verification, and played the waiting game hoping the prices wouldn't shoot up again while my bank deposit went through.

Independent Reserve

Independent Reserve is a popular exchange. They allow trading in multiple AUD pairings as well as USD and NZD.

The best thing about Independent Reserve is their verification and purchase time: I was able to verify and place an instant order with POLi in less than an hour. Unlike Cointree, POLi payments here let you withdraw the Bitcoin you purchase immediately.

This was my biggest savior as it allowed me to withdraw my funds straight away into exchanges like Bittrex or Binance and get back to trading.

Their support is definitely amazing; I was able to increase my account limits in less than an hour.

If you want to buy in NOW then these guys are the best choice considering the speed of ID verification and payment. Kudos to them.

If you decide to use the Independent Reserve, please check out with my referral code https://www.independentreserve.com?invite=WJPMJN

BTCMarkets

BTCMarkets is an exchange where you place buy and sell orders for the amount of bitcoin you want to buy. They currently have a number of pairings like Bitcoin, Ethereum, Litecoin and Monero.

They charge a 0.85% sliding commission, which means the more you trade the lower the commission. Depositing funds is done through POLi and BPAY.

BTCMarkets seems to be the most popular with multiple trades happening every minute so you are not left there waiting for your buy order to be fulfilled (assuming you place a reasonable request).

I saw a large volume of orders ranging from $200 all the way up to $100k+ so it's definitely not a small time exchange.

Their support is, however, an absolute letdown. Prepare to wait days for an unhelpful response. My POLi deposit has taken almost a whole week and is still pending, and there are lots of complaints on their Facebook page about slow payments too.

If you wish to get in quickly, BTCMarkets is not the option, as their verification process can take up to 10 days since they opt to send you a letter in the mail for ID verification. Once you're in though, the trade volume definitely looks good.

Cointree

Unlike the other two options, Cointree only allows you to buy at the "market" price. You must put faith in their business model where they promise to find you the best possible price.

They charge a 3% commission which can be quite high if you plan on purchasing a large amount.

Cointree's payment model is not very comforting as you will be required to place an order without knowing exactly how much you are getting in return.

Cointree has three payment methods:

  • Bank transfer payments (with a limit of $500?!) can take up to 2-3 days, so you must wait and pray that the price does not fluctuate too much.
  • Their POLi payment option is "instant" so you are able to purchase at their current rate, however your Bitcoin is locked in their account for 1-2 days until the funds clear.
  • Their over the counter deposit at NAB bank allows you to deposit up to $5,000 and they claim it takes about 30 minutes for the confirmation.

If you simply want to buy Bitcoin without worrying about the price, then Cointree will be the best option for you.

If you understand the bitcoin market then their business model may not be the best fit for you. For example, I put through one bank transfer order during the dip, but by the time my funds cleared I ended up purchasing at a higher rate. My POLi transfer was "instant" but locked in the account for 2 days, not allowing me to withdraw the funds and trade on the exchanges (this was depressing). This is unlike Independent Reserve, who let you buy and withdraw instantly.

Cointree support seems to be very responsive and live chat is always online, however I found them to be rather rude and unhelpful.

If you do decide to end up using Cointree please use my referral link https://www.cointree.com.au/?r=3300


In summary, if you just want the best price and are happy to rely on someone else, use the Cointree POLi instant payment. That way you can lock in a price which is generally slightly lower than other vendors. Don't bother with their bank transfer or you will take the risk of a price hike like I did.

Use Independent Reserve if you want quick bitcoin to take to the exchanges.

As always, DYOR and good luck!

]]>
<![CDATA[Script for creating EBS persistent volumes in OpenShift/Kubernetes]]>If you aren't using the automated dynamic volume provisioning (which you should be!), here is a short bash script to help you automatically create both the EBS volume and the Kubernetes persistent volume:

#!/bin/bash

if [ $# -ne 2 ]; then  
    echo "Usage: sh create-volumes.sh SIZE COUNT"
    exit
fi

for i in `seq
]]>
https://www.andrewklau.com/script-for-ebs-persistent-volumes-in-openshift/e61f974b-6fde-46e5-bac5-89b0754af6baTue, 11 Apr 2017 05:03:36 GMTIf you aren't using the automated dynamic volume provisioning (which you should be!), here is a short bash script to help you automatically create both the EBS volume and the Kubernetes persistent volume:

#!/bin/bash

if [ $# -ne 2 ]; then  
    echo "Usage: sh create-volumes.sh SIZE COUNT"
    exit 1
fi

for i in `seq 1 $2`; do  
  size=$1
  vol=$(ec2-create-volume --size $size --region ap-southeast-2 --availability-zone ap-southeast-2a --type gp2 --encrypted | awk '{print $2}')
  size+="Gi"

  echo "
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    labels:
      failure-domain.beta.kubernetes.io/region: ap-southeast-2
      failure-domain.beta.kubernetes.io/zone: ap-southeast-2a
    name: pv-$vol
  spec:
    capacity:
      storage: $size
    accessModes:
      - ReadWriteOnce
    awsElasticBlockStore:
      fsType: ext4
      volumeID: aws://ap-southeast-2a/$vol
    persistentVolumeReclaimPolicy: Delete" | oc create -f -
done  
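
A quick usage sketch, assuming the legacy ec2-api-tools and the oc client are already configured for your account (the size and count below are just example values):

# create three 10GiB encrypted gp2 volumes and register matching PVs
sh create-volumes.sh 10 3

# confirm the persistent volumes registered with the cluster
oc get pv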
]]>
<![CDATA[WordPress editor missing when using CloudFront]]>We often put CloudFront in front of our WordPress sites to significantly improve the load times of the website.

CloudFront and WordPress have a few quirks; the main one is the rich post/page editor suddenly going missing from your wp-admin.

The issue comes down to the

]]>
https://www.andrewklau.com/wordpress-editor-missing-when-using-cloudfront/ef572ca6-de07-4bc6-822f-d9525167ee7eSun, 02 Apr 2017 00:28:51 GMTWe often put CloudFront in front of our WordPress sites to significantly improve the load times of the website.

CloudFront and WordPress have a few quirks; the main one is the rich post/page editor suddenly going missing from your wp-admin.

The issue comes down to the UA sniffing that WordPress does: unless you whitelist the header for forwarding, CloudFront replaces the viewer's User-Agent with "Amazon CloudFront", so WordPress no longer recognises the browser and disables the rich editor.

Adding this into your theme's functions.php is a good quick fix:

/**
* Ignore UA Sniffing and override the user_can_richedit function
* and just check the user preferences
*
* @return bool
*/
function user_can_richedit_override() {  
    global $wp_rich_edit;

    if (get_user_option('rich_editing') == 'true' || !is_user_logged_in()) {
        $wp_rich_edit = true;
        return true;
    }

    $wp_rich_edit = false;
    return false;
}

add_filter('user_can_richedit', 'user_can_richedit_override');  
]]>
<![CDATA[Quickly build your own CentOS 6 & 7 AMI]]>Following on from my previous post Roll your own CentOS 6.5 HVM AMI in less than 15 minutes, here's the snippet for booting into an automated kickstart install for building your new AMI.

This also works for other VM providers such as Azure.

Don't forget to replace the

]]>
https://www.andrewklau.com/quickly-build-your-own-centos-7-ami/8d4f93bb-baec-450a-8409-0bfd532e6a25Wed, 30 Mar 2016 21:50:21 GMTFollowing on from my previous post Roll your own CentOS 6.5 HVM AMI in less than 15 minutes, here's the snippet for booting into an automated kickstart install for building your new AMI.

This also works for other VM providers such as Azure.

Don't forget to replace the version and appropriate snippets with your own.

version=<%= @osver %>  
mirror=http://mirror.centos.org/centos/

# Detect primary root drive
if [ -e /dev/xvda ]; then  
  drive=xvda
elif [ -e /dev/vda ]; then  
  drive=vda
elif [ -e /dev/sda ]; then  
  drive=sda
fi

yum -y install wget  
mkdir /boot/centos  
cd /boot/centos  
wget ${mirror}/${version}/os/x86_64/isolinux/vmlinuz  
wget ${mirror}/${version}/os/x86_64/isolinux/initrd.img

cat > /boot/centos/kickstart.ks << EOL  
<%= snippet 'centos_kickstart' %>

%post
<%= snippet 'centos_post' %>  
<%= snippet 'centos_post_finish' %>  
%end
EOL

if [ ${version} == 6 ]; then  
  echo "
  default         0
  timeout         0
  hiddenmenu

  title CentOS 6 Installation
          root (hd0,0)
          kernel /boot/centos/vmlinuz ip=dhcp ksdevice=eth0 ks=hd:${drive}1:/boot/centos/kickstart.ks method=${mirror}/${version}/os/x86_64/ lang=en_US keymap=us
  initrd /boot/centos/initrd.img " > /boot/grub/grub.conf
else  
  echo "menuentry 'centosinstall' {
          set root='hd0,msdos1'
      linux /boot/centos/vmlinuz ip=dhcp ksdevice=eth0 ks=hd:${drive}1:/boot/centos/kickstart.ks method=${mirror}/${version}/os/x86_64/ lang=en_US keymap=us
          initrd /boot/centos/initrd.img
  }" >> /etc/grub.d/40_custom

  echo 'GRUB_DEFAULT=saved
  GRUB_HIDDEN_TIMEOUT=
  GRUB_TIMEOUT=2
  GRUB_RECORDFAIL_TIMEOUT=5
  GRUB_CMDLINE_LINUX_DEFAULT="quiet nosplash vga=771 nomodeset"
  GRUB_DISABLE_LINUX_UUID=true' > /etc/default/grub

  grub2-set-default 'centosinstall'
  grub2-mkconfig -o /boot/grub2/grub.cfg
fi  
reboot  
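
If you're not rendering this through Foreman, the ERB tags above won't expand on their own; a minimal sketch of the stand-ins you'd substitute by hand (the values here are assumptions, replace them with your own):

# Hypothetical replacements for the Foreman ERB tags
version=7                                  # stands in for <%= @osver %>
mirror=http://mirror.centos.org/centos/
# and paste the body of your own kickstart where the
# <%= snippet 'centos_kickstart' %> / 'centos_post' tags appear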
]]>
<![CDATA[Cleaning Up Docker Host]]>Sometimes testing docker images consumes a lot of disk space depending on what you're using. If you're like me and don't have a brand new 1TB SSD, then you can't afford to have disk space wasted.

Here are three quick commands to:
1. Stop all docker containers
2. Remove all docker

]]>
https://www.andrewklau.com/cleaning-up-docker-host/e0bca385-5eba-4a19-8da7-c7db72fa6f4fWed, 08 Apr 2015 03:45:43 GMTSometimes testing docker images consumes a lot of disk space depending on what you're using. If you're like me and don't have a brand new 1TB SSD, then you can't afford to have disk space wasted.

Here are three quick commands to:
1. Stop all docker containers
2. Remove all docker containers
3. Remove all docker images

docker stop $(docker ps -a -q)  
docker rm $(docker ps -a -q)  
docker rmi $(docker images -q)  

Now we can start fresh and try out or build new docker containers. Also note: if you created external volume mount folders, these commands won't remove them.
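
If you'd rather not wipe everything, a gentler sketch that only removes untagged (dangling) image layers:

# remove only untagged intermediate images, leaving tagged images and containers alone
docker rmi $(docker images -f "dangling=true" -q)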

]]>
<![CDATA[Error response from daemon: Cannot start container (....) (exit status 1)]]>This is a fairly generic error message, but for some of you it may mean your iptables rules are conflicting.

Stop using lokkit!

If you use lokkit for your iptables rules, you'll overwrite the docker rules, so you'll have to run service docker restart and try again.

Now the docker firewall rules

]]>
https://www.andrewklau.com/error-response-from-daemon-cannot-start-container-exit-status-1/9c3b932e-2040-4de5-baea-e8700aae5cb6Wed, 08 Apr 2015 03:42:20 GMTThis is a fairly generic error message, but for some of you it may mean your iptables rules are conflicting.

Stop using lokkit!

If you use lokkit for your iptables rules, you'll overwrite the docker rules, so you'll have to run service docker restart and try again.

Now the docker firewall rules should be loaded and you can start your docker container!
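
A quick way to confirm the rules came back after the restart (docker maintains its own DOCKER chain, so it should show up again):

service docker restart

# the DOCKER chains should reappear once the daemon rewrites its rules
iptables -L -n | grep -i docker
iptables -t nat -L DOCKER -n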

]]>
<![CDATA[FATA[0000] Error response from daemon: Cannot start container a788e23879a4257918008b62bd6bfdaceb69cb6364180d1259c1348df0a4bd91: failed to find the cgroup root]]>Today after a reboot on a fresh CentOS 6.6 docker host, the containers were failing to startup with the error message:

FATA[0000] Error response from daemon: Cannot start container xxxxxxx: failed to find the cgroup root

Fix seems simple enough, the cgroup services aren't running:

[root@xx ~]# /etc/
]]>
https://www.andrewklau.com/fata0000-error-response-from-daemon-cannot-start-container-a788e23879a4257918008b62bd6bfdaceb69cb6364180d1259c1348df0a4bd91-failed-to-find-the-cgroup-root/eaf2a33c-fa52-48e6-ba46-5f2bd33e1463Tue, 07 Apr 2015 23:10:21 GMTToday after a reboot on a fresh CentOS 6.6 docker host, the containers were failing to startup with the error message:

FATA[0000] Error response from daemon: Cannot start container xxxxxxx: failed to find the cgroup root

Fix seems simple enough, the cgroup services aren't running:

[root@xx ~]# /etc/init.d/cgred status
cgred is stopped  
[root@xx ~]# /etc/init.d/cgred start
Starting CGroup Rules Engine Daemon:                       [  OK  ]  
[root@xx ~]# /etc/init.d/cgconfig status
Stopped  
[root@xx ~]# /etc/init.d/cgconfig start
Starting cgconfig service:                                 [  OK  ]

[root@xx ~]# /etc/init.d/docker restart
Stopping docker:                                           [  OK  ]  
Starting docker:                                       [  OK  ]

[root@xx ~]# docker run ....

Now run your containers and you should be all good to go :)

Hope this may help someone who stumbles across this. Remember to enable the services so it doesn't happen on next reboot!

chkconfig cgconfig on  
chkconfig cgred on  

Docker is fun :)

]]>
<![CDATA[Installing Google Music Manager on Fedora 21]]>If you're wondering why Google Music Manager isn't starting after you installed it from their website, here's the problem:

$ google-musicmanager 
/usr/bin/google-musicmanager: error while loading shared libraries: libQtWebKit.so.4: cannot open shared object file: No such file or directory

How to resolve it,

yum provides '*/libQtWebKit.so.
]]>
https://www.andrewklau.com/installing-google-music-manager-on-fedora-21/e0ec0700-fe7f-45d9-b3ae-fb083aa8c9a0Sat, 03 Jan 2015 11:13:01 GMTIf you're wondering why Google Music Manager isn't starting after you installed it from their website, here's the problem:

$ google-musicmanager 
/usr/bin/google-musicmanager: error while loading shared libraries: libQtWebKit.so.4: cannot open shared object file: No such file or directory

How to resolve it,

yum provides '*/libQtWebKit.so.4'  
Repo        : fedora  
Matched from:  
Filename    : /usr/lib/libQtWebKit.so.4  
Filename    : /usr/lib/sse2/libQtWebKit.so.4

qtwebkit-2.3.4-1.fc21.x86_64 : Qt WebKit bindings  
Repo        : fedora  
Matched from:  
Filename    : /usr/lib64/libQtWebKit.so.4

qtwebkit-2.3.4-1.fc21.i686 : Qt WebKit bindings  
Repo        : @fedora  
Matched from:  
Filename    : /usr/lib/libQtWebKit.so.4  
Filename    : /usr/lib/sse2/libQtWebKit.so.4

Solution:

sudo yum install qtwebkit qtwebkit-devel  

Now it should load for you :) Would be damn helpful if Google just included it as a dependency with the rpm..

However, once it's loaded the problems aren't over yet. There seems to be ANOTHER bug with actually getting it to load. The current workaround seems to be: empty your Music directory first (or choose an EMPTY directory) for the initial upload. Don't worry if it says it failed to upload or that there are fewer than 10 songs in the directory. Follow through, then reopen it, and finally it'll load what we wanted to see.

If you don't follow these steps, you'll find you can't open it again..

Finally, if it goes missing again, check your message icon tray as it's probably just minimized itself. Windows Key + M, and you'll see that tiny headphone icon tucked away in the bottom right corner.

Enjoy the music

]]>
<![CDATA[My Fedora 21 Gaming Rig using VT-D and VFIO without compromise!!]]>Fedora 21 was released recently, and naturally it was a good excuse to buy a new gaming rig, am I right?

Previously, I was happy with gaming on Linux with my current favourite, Dota 2 but new games were coming out which I just wanted to try, but, winblows...

So

]]>
https://www.andrewklau.com/my-fedora-21-gaming-rig-using-vt-d-and-vfio-without-compromise/f722f079-859a-489d-9fa6-03d24463f7c9Sat, 03 Jan 2015 01:00:00 GMTFedora 21 was released recently, and naturally it was a good excuse to buy a new gaming rig, am I right?

Previously, I was happy with gaming on Linux with my current favourite, Dota 2 but new games were coming out which I just wanted to try, but, winblows...

So here's how I got set up with a Windows 8 VM, with VT-d passthrough, at almost the same level of performance.

Hardware:

  • Asrock H97M PRO4 ($105)
    • Asrock has better "support" for Linux, whereas the popular Asus will flat out shut you down if you mention Linux.
  • EVGA Geforce GTX 750TI ($169)
    • Best output display ports, and small form factor.
  • Intel i5-4590 ($249)
    • Best price per performance other than the G3258, but the i5 is required for VT-d.
  • Corsair CS550M 80+ Gold PSU ($119)
    • The PSU is what you should never skimp on. The power savings of an 80+ Gold unit will pay for itself after 2-3 years, so it's a no-brainer.
  • Samsung 830 Series 128GB SSD (reused)
    • I reused the SSD from my old rig
  • Fractal Design Core 1100 Mini Tower Case ($54)
    • Nice small case which looks decent and was cheap, it's internals aren't that bad but cable management is a little annoying.
  • Kingston HyperX Fury Blue 1866MHz Desktop RAM (4GBx2) ($95)
    • RAM is RAM nowadays, but I'll definitely need another 8GB as VMs are memory intensive with their overhead. Note, the motherboard is 1600MHz but the price for 1866MHz was only $1 extra..

Total: $791 AUD (from MSY)

Here are just some of my own personal tweaks and package installs, to run after the Fedora 21 installer has finished (which, btw, has improved a lot since 20 -- thank you!!):

sudo -i  
echo 'vm.swappiness = 10' >> /etc/sysctl.conf

yum -y update  
# Download Google Chrome here

sudo yum localinstall --nogpgcheck http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm http://download1.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm  -y

# Install packages
sudo yum update  -y  
sudo yum install python pyxdg pygobject2 pylast gstreamer-python notify-python dbus-python gstreamer1-plugins-good gstreamer1-plugins-bad-free gstreamer1-plugins-ugly gstreamer-plugins-good gstreamer-plugins-bad gstreamer-plugins-ugly python-setuptools python-distutils-extra git  -y  
sudo yum install nano tmux pithos  -y

# Enable TRIM
cat <<'EOF' > /etc/cron.daily/fstrim.sh  
#! /bin/sh  

# By default we assume only / is on an SSD. 
# You can add more SSD mount points, separated by spaces.
# Make sure all mount points are within the quotes. For example:
# SSD_MOUNT_POINTS='/ /boot /home /media/my_other_ssd'  

SSD_MOUNT_POINTS='/ /home'  

for mount_point in $SSD_MOUNT_POINTS  
do  
    fstrim $mount_point  
done  
EOF  
chmod +x /etc/cron.daily/fstrim.sh

# skype
sudo yum install lpf-skype  
lpf-skype  
sudo yum install alsa-plugins-pulseaudio.i686

# Use virt-preview for the latest goodies
cd /etc/yum.repos.d/  
wget http://fedorapeople.org/groups/virt/virt-preview/fedora-virt-preview.repo

yum update  
yum install @virtualization  

The magic begins here:

# Grab the EFI OVMF image
wget https://www.kraxel.org/repos/firmware.repo  
yum install edk2.git-ovmf-x64

# Give time for guests to shutdown when host is powering off
sed -i 's/#ON_SHUTDOWN=.*/ON_SHUTDOWN=shutdown/' /etc/sysconfig/libvirt-guests  
systemctl enable libvirt-guests  
systemctl enable libvirtd

# Note down the PCI numbers [xxxx:xxxx] of the NVIDIA cards
lspci -vvvvv  
lspci -nn

yum -y install nano  
nano /etc/default/grub  

Append the following to the end of the GRUB_CMDLINE_LINUX line:

intel_iommu=on pci-stub.ids=10de:1380,10de:0fbc,8086:0c0c,8086:8ca0  

Note, these values come from the lspci commands above and are comma separated; I took all the NVIDIA ones. What this does is prevent the host machine (Fedora 21) from claiming these PCI devices, leaving them "hanging" for our VM to claim later.

YOURS WILL BE DIFFERENT
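
If you want to pull the [vendor:device] pairs straight out of lspci rather than eyeballing them, a rough sketch (assumes GNU grep and coreutils; tweak the pattern for whichever devices you're passing through):

# list the NVIDIA functions; each line ends with an ID like [10de:1380]
lspci -nn | grep -i nvidia

# extract just the vendor:device pairs, comma separated, ready for pci-stub.ids=
lspci -nn | grep -i nvidia | grep -o '\[....:....\]' | tr -d '[]' | paste -sd, -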

Now regenerate the grub2 config. Note the EFI path; if you didn't install with EFI, it'll be in a different location (/boot/grub2/grub.cfg).

grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg

reboot
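
Once it's back up, it's worth a quick sanity check that the IOMMU came on and pci-stub actually grabbed the cards before going any further (exact messages vary by kernel):

# confirm the IOMMU was enabled at boot
dmesg | grep -e DMAR -e IOMMU

# confirm the devices listed in pci-stub.ids were claimed by the stub driver
dmesg | grep pci-stub
lspci -k | grep -B 2 'pci-stub'   # look for "Kernel driver in use: pci-stub"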

Create a copy of our OVMF efi image.

cp /usr/share/edk2.git/ovmf-x64/OVMF-pure-efi.fd /var/lib/libvirt/images/win8-OVMF.fd  
restorecon -r /var/lib/libvirt/images/win8-OVMF.fd  
chmod 755 /var/lib/libvirt/images/win8-OVMF.fd  

When you've booted back up, fire up virt-manager (it's gotten better, trust me). Now create a VM and add the PCI cards (all NVIDIA devices), USB keyboard, Mouse etc.

Attach the firmware file as a USB device so selinux labels it correctly (/var/lib/libvirt/images/win8-OVMF.fd). I couldn't find any other way to relabel it correctly as they seem to be assigned per VM.

Attach virtio drivers as CD disk, you can find these here http://alt.fedoraproject.org/pub/alt/virtio-win/latest/images/bin/

Now, we stopped our VM earlier because we want to make a few modifications which can't be done through virt-manager:

virsh edit xyz


# Append the following sections (between the ...).
<domain type='kvm'>  
  <features>
    ...
    <kvm>
      <hidden state='on'/>
    </kvm>
    ...
  </features>
 ...
  <os>
    ...
    <loader type='pflash'>/var/lib/libvirt/images/win8-OVMF.fd</loader>
  </os>

# Make sure CPU mode is host-passthrough and the topology is the same.
<cpu mode='host-passthrough'>  
   <topology sockets='1' cores='4' threads='1'/>
</cpu>

# Remove these lines
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>

    </hyperv>

# Remove hypervclock

  <clock offset='localtime'>
    ...
    <timer name='hypervclock' present='yes'/>
    ...
  </clock>

These edits load our EFI OVMF image as the BIOS loader, allowing us to pass through our NVIDIA card without gimping our own host PC's performance. More info at this amazing blog: http://vfio.blogspot.com.au/2014/08/primary-graphics-assignment-without-vga.html

Finally, the other changes let us bypass Nvidia's stupid ignorance that blocks the Nvidia drivers from loading if they detect you're running in a VM.

Now fire up the VM through virt-manager, attaching your Winblows ISO, and watch as it speeds through the install (if you're on an SSD) and get gaming, friends!!

Two major issues I noticed, which may be linked: the first is that 8GB of RAM is just not enough. Right now it's out of my budget to upgrade with another 8GB, but unless your host is running nothing, you may find that even if you allocate 4GB to your guest (Windows 8), it may end up using close to 6GB. If the host is running some memory-hungry application like Chrome, it'll start hitting swap space, which has negative effects on your VM's performance. The other issue is I'm noticing horrible artifacts appearing in my games after about 60-90 minutes of gameplay. When I'm not hitting swap space, it doesn't seem to happen (so far).

Hope this helps..

If you want to read more, and haven't already gone back to playing games, this site has so much interesting reading you'll be stuck there for a few hours: http://vfio.blogspot.com.au/

]]>
<![CDATA[My standing desk, one year later. Now 2015 man cave..]]>So it seems every time people come to visit my place, there's a comment about my desk and the mandatory 2 minutes of staring followed by 8 minutes of questions. Here's a snapshot of what it looks like now, as of November 2014

It's a work in progress ever since I decided

]]>
https://www.andrewklau.com/my-standing-desk-one-year-later/db47d915-cac1-4c37-8565-d3d91bb2b30dFri, 02 Jan 2015 12:20:21 GMTSo it seems every time people come to visit my place, there's a comment about my desk and the mandatory 2 minutes of staring followed by 8 minutes of questions. Here's a snapshot of what it looks like now, as of November 2014

It's been a work in progress ever since I decided to try the concept in early 2014. I think my main motivation was that my Officeworks leather chair had started to lose its padding; my buns were getting sore from sitting all day and my lower back always felt weak. But rather than being any other normal person and going out on the hunt for a new chair, I stumbled upon the concept of a standing desk and felt the urge to do it. Now if you know me, if there's something I want, I will get it... eventually. Plus I was bored...

What I used:

  • 3x IKEA Black Lack Table, same ones used for the LackRack ($7 each).
  • 2x IKEA RAST Bedside table, one for keyboard and mouse. Second for vesa mount ($15 each)
  • 1x Vesa Mount Stand (2 monitor stand for $50)
  • Mikrotik RB2011UAS-RM ($1xx from Duxtel)
  • 2x Wood planks, various sizes (Few bucks from Bunnings)
  • Large Standing Mat from Imagemats ($50~)

Original Standing Desk

2014 Standing (post VESA mount install); the three phone books served me well! On the left were some boxes to prop up my Thinkpad Yoga, which connected to the One Link dock.

I used the three Lack tables stacked up on top of my old long traditional wood table. I actually bought a table top piece from the IKEA "as-is" section and placed it on top of two smaller tables to give me a total of 2 meters of desk space.

For the monitor stands, I didn't trust the cardboardy Lack tables to hold my two brand new IPS screens (not in the above image), so I used one of the RAST bedside tables, a much more solid-feeling construction, along with a plank of wood to give the stand more to grip. This also gave me more table space for storage, phones, HDDs etc.

The second RAST table I put face down as a base for my keyboard and mouse. This is so useful, because I often found myself leaning against it, and it feels so sturdy and solid.

As for the other bits and pieces, you can see my Mikrotik router sits snugly in between the legs, no screws at all; it just sits there because it's so damn light. Finally, the standing mat is a MUST have. If you don't have one, you'll give up. Trust me, it's worth the investment; don't skimp on a cheap low-quality one.

2015 - Man Cave

So this is my setup now, for the new year, after one full week of cleaning up and multiple visits to the recycling center (there was too much crap for hard rubbish).

Standing Desk 2015

2015 Standing

New Things:

  • Dicksmith 40" TV (Discounted for $219)
  • Chromecast (Got it free with the Moto G 2nd gen I bought for my mum)

Other than the TV, I just rearranged my area and merged it with my workout space to create a more relaxed environment. I've now replaced my laptop with a more powerful desktop, where I had fun with VT-d and creating a winblows VM for gaming. I'm happy with Dota 2 on Linux, but the variety on winblows is nice. Plus, with a simple xrandr script, I can very easily switch between my standing desk and a chair without much hassle: right now it's just one script and the input button on my monitor. More on my VT-d fun soon!

Hope this inspires someone to try a standing desk in the new year; I've found my lower back has improved dramatically and I can finally get back to running instead of just cycling.

Happy New Year!

]]>
<![CDATA[OpenShift in Australia with AusNimbus]]>Platform as a Service is growing in demand, and over the new year holidays we finally had the time to finish our website after months of private beta.

Check it out: AusNimbus finally has a website and is accepting public signups with a 7 day free trial.

When you signup you'll

]]>
https://www.andrewklau.com/openshift-in-australia-with-ausnimbus/be942953-750a-4ea2-84f8-fb885356080bThu, 01 Jan 2015 01:00:00 GMTPlatform as a Service is growing in demand, and over the new year holidays we finally had the time to finish our website after months of private beta.

Check it out: AusNimbus finally has a website and is accepting public signups with a 7 day free trial.

When you sign up you'll find a completely different experience if you've been using OpenShift Online, as we've done a full design overhaul, with integrated domain registration and management included as part of the service. That means you can register, create and manage your domain and application all from one place.

Pricing starts from $10 AUD/month for standard gears, billed per hour. This includes 512MB RAM + 1GB SSD storage.

]]>
<![CDATA[Installing PositiveSSL on Apache (and on AWS cloudfront)]]>PositiveSSL is that cheap SSL cert which we all get for peanuts from Namecheap; there's no shame in admitting that. However, the instructions for installing it properly always seem to be misguided and Comodo's website is just horrible...

Here's all you need to do. Your zip file should contain four .crt files:

  • AddTrustExternalCARoot.
]]>
https://www.andrewklau.com/installing-positvessl-on-apache-the-working-way/f73c5aa3-eaa1-48a5-9feb-36d13c726599Tue, 30 Dec 2014 10:43:48 GMTPositiveSSL is that cheap SSL cert which we all get for peanuts from Namecheap; there's no shame in admitting that. However, the instructions for installing it properly always seem to be misguided and Comodo's website is just horrible...

Here's all you need to do. Your zip file should contain four .crt files:

  • AddTrustExternalCARoot.crt
  • COMODORSAAddTrustCA.crt
  • COMODORSADomainValidationSecureServerCA.crt
  • domain.crt

For browsers to trust you properly, you need to provide the intermediate certificates WITH your certificate. Putting them in just the chain file seems not to be enough, so both your cert AND chain file should end up being this combined.crt:

cat domain.crt COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt > combined.crt

Note, AddTrustExternalCARoot.crt is not recommended to be included.
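
Before touching Apache it's worth verifying the chain locally with the four files from the zip; a quick sketch:

# build the intermediate bundle in leaf-to-root order
cat COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt > intermediates.crt

# verify the domain cert against the root, supplying the intermediates as untrusted
openssl verify -CAfile AddTrustExternalCARoot.crt -untrusted intermediates.crt domain.crt
# expected output: domain.crt: OK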

So finally, our Apache config should be something like:

  SSLCertificateFile /etc/pki/tls/certs/combined.crt
  SSLCertificateKeyFile /etc/pki/tls/private/domain.key
  SSLCertificateChainFile /etc/pki/tls/certs/combined.crt

You'll probably want to do your own research to determine the ideal cipher suites too.

Hope that helped some of you, as I spent a bit of time puzzled why many people were giving the wrong steps.

When in doubt, this site is the best way to verify you have everything set up properly:
https://ssltools.geotrust.com/checker/views/certCheck.jsp

Happy New Year!

UPDATE:

If you try this method on AWS, it will error back with something like:

A client error (MalformedCertificate) occurred when calling the UploadServerCertificate operation: Unable to validate certificate chain. The certificate chain must start with the immediate signing certificate, followed by any intermediaries in order. The index within the chain of the invalid certificate is: -1  

To get it working with AWS, it expects PEM format and the certificate body to be supplied by itself, with the chain separate. So this should get you fixed up:

(openssl x509 -inform PEM -in COMODORSADomainValidationSecureServerCA.crt; openssl x509 -inform PEM -in COMODORSAAddTrustCA.crt) > ca.crt

Then:

aws iam upload-server-certificate --server-certificate-name www.domain.com.au --certificate-body file:////domain_com_au.crt --private-key file:///domain_com_au.key --certificate-chain file:///ca.crt --path /cloudfront/www.domain.com.au/  
]]>
<![CDATA[How to build a custom CentOS DigitalOcean image]]>Unified instance images are critical to a sound solid cloud deployment. There's nothing more annoying than having to manage the dozens of different types of images just because a different provider enforces their default image on you.

While Digital Ocean doesn't seem to support custom images, kickstarts or kernel parameters

]]>
https://www.andrewklau.com/how-to-build-a-custom-centos-digitalocean-image/3547366f-0551-4b7f-be2f-494d0fcd88cfMon, 01 Dec 2014 16:36:39 GMTUnified instance images are critical to a sound solid cloud deployment. There's nothing more annoying than having to manage the dozens of different types of images just because a different provider enforces their default image on you.

While Digital Ocean doesn't seem to support custom images, kickstarts or kernel parameters, here's how I managed to do a full kickstart install!

The snippet below can be run as a single bash script. This works by first downloading the vmlinuz and initrd.img which are the minimum required to start the CentOS installer.

Next we define our kickstart file; this is optional, but it allows us to automate the steps, especially the %post section, which covers the most important part: partition labeling.

After we've defined our kickstart, we boot into the CentOS installer using kexec. The $IP, $NETMASK and $GATEWAY variables are pulled from the metadata service, but can be substituted with hand-coded values.

Once it's done, sit back and watch the install go over the remote console. You will not require any further input.

yum -y install wget curl

mkdir /boot/centos  
cd /boot/centos  
wget http://mirror.centos.org/centos/6/os/x86_64/isolinux/vmlinuz  
wget http://mirror.centos.org/centos/6/os/x86_64/isolinux/initrd.img

cat <<'EOL' > /boot/centos/kickstart.ks

skipx  
text  
install  
url --url=http://mirror.centos.org/centos/6/os/x86_64  
firewall --enabled --service=ssh  
repo --name="Base" --baseurl=http://mirror.centos.org/centos/6/os/x86_64/

rootpw  --iscrypted $6$lApTqNOAYmyCrIfy$dXt9vKgMGihzZZniafkcHyMf/QzM7iSDmcLwEVcO.IewBP0EX9HVCJJrMXsv1u2Er568sma/jdPi4dcOFDvXA0  
authconfig --enableshadow --passalgo=sha512

# System keyboard
keyboard us  
# System language
lang en_US.UTF-8  
# SELinux configuration
selinux --enforcing

# System services
services --enabled="network,sshd,rsyslog,tuned,acpid"  
# System timezone
timezone Australia/Melbourne  
# Network information
network  --bootproto=dhcp --device=eth0 --onboot=on  
#network  --bootproto=dhcp --device=eth1 --onboot=on
bootloader --location=mbr --driveorder=vda --append="crashkernel=auto rhgb quiet"  
zerombr  
clearpart --all --drives=vda  
part / --fstype=ext4 --size=1000 --grow  
#part swap --size=512
reboot

%packages --nobase
@core
@server-policy
%end

%post

# DigitalOcean customization
touch /etc/digitalocean

# DO sets the kernel param root=LABEL=DOROOT
e2label /dev/vda1 DOROOT  
sed -i -e 's?^UUID=.* / .*?LABEL=DOROOT     /           ext4    defaults,relatime  1   1?' /etc/fstab

sync;

%end
EOL

# Enable swap partition if less than 800mb RAM
totalmem=$(cat /proc/meminfo | grep MemTotal | awk '{ print $2 }')  
if [ $totalmem -lt 800000 ]; then  
  sed -i 's/#part swap --size=512/part swap --size=512/g' /boot/centos/kickstart.ks 
fi


# Booting into vmlinuz
yum install -y kexec-tools

IP=$(curl http://169.254.169.254/metadata/v1/interfaces/public/0/ipv4/address)  
NETMASK=$(curl http://169.254.169.254/metadata/v1/interfaces/public/0/ipv4/netmask)  
GATEWAY=$(curl http://169.254.169.254/metadata/v1/interfaces/public/0/ipv4/gateway)

sed -i "s/network  --bootproto=dhcp --device=eth0 --onboot=on/network  --bootproto=static --device=eth0 --onboot=on --ip=$IP --netmask=$NETMASK --gateway=$GATEWAY --nameserver 8.8.8.8,8.8.4.4/g"1 /boot/centos/kickstart.ks

kexec -l /boot/centos/vmlinuz --initrd=/boot/centos/initrd.img  --append="ip=$IP netmask=$NETMASK gateway=$GATEWAY dns=8.8.8.8 ksdevice=eth0 ks=hd:vda1:/boot/centos/kickstart.ks method=http://mirror.centos.org/centos/6/os/x86_64/ lang=en_US keymap=us"  
sleep 2  
kexec -e  

After it's installed, you will face an issue with the eth0 interface not coming up. This is related to the different kernels provided; you will need to power off the instance and select the latest CentOS kernel in the control panel, making sure it matches the one installed on your system. You'll notice the problem when you see:
Device eth0 does not seem to be present, delaying initialization
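
To see which kernel the installer actually put on the disk (run this from the DigitalOcean remote console, since networking may be down), something like this works:

# list the installed kernel package(s)
rpm -q kernel

# or check what grub is set to boot
grep title /boot/grub/grub.conf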

Now you should be good to go with a fresh CentOS install, most importantly with SELinux enabled! Package it up as a snapshot using the DigitalOcean control panel and let loose with your prebuilt custom CentOS images.

If you use foreman like I do, there's a plugin available to add DigitalOcean as a compute resource. It still appears to have some issues but two lines will get you going:

yum install ruby193-rubygem-foreman_digitalocean  
service foreman restart  

Your API keys are available at https://cloud.digitalocean.com/apiaccess to add to the compute resources section.

If you intend to use this as a snapshot, there are a few extra things you may consider adding to your %post section:

yum -y install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm  
yum -y install https://yum.puppetlabs.com/puppetlabs-release-el-6.noarch.rpm

yum -y install cloud-init nano screen ntp ntpdate curl wget dracut-modules-growroot acpid tuned puppet openssh-clients

yum -y update

# make sure firstboot doesn't start
echo "RUN_FIRSTBOOT=NO" > /etc/sysconfig/firstboot

# set virtual-guest as default profile for tuned
echo "virtual-guest" > /etc/tune-profiles/active-profile


# Fix hostname on boot
sed -i -e 's/\(preserve_hostname:\).*/\1 False/' /etc/cloud/cloud.cfg  
sed -i '/HOSTNAME/d' /etc/sysconfig/network  
rm /etc/hostname

# Remove all mac address references
sed -i '/HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth0  
sed -i '/HOSTNAME/d' /etc/sysconfig/network-scripts/ifcfg-eth0  
sed -i '/UUID/d' /etc/sysconfig/network-scripts/ifcfg-eth0

sed -i '/HWADDR/d' /etc/sysconfig/network-scripts/ifcfg-eth1  
sed -i '/HOSTNAME/d' /etc/sysconfig/network-scripts/ifcfg-eth1  
sed -i '/UUID/d' /etc/sysconfig/network-scripts/ifcfg-eth1

yum clean all  
rm -rf /etc/ssh/*key*

rm -f /root/.ssh/*  
rm -f /home/cloud-user/.ssh/*

rm -f /etc/udev/rules.d/*-persistent-*  
touch /etc/udev/rules.d/75-persistent-net-generator.rules  

This took me some time to hack together, but I'm happy to get this working as we can now add DO to our list of available hosting providers.

Finally, the EL6 version of cloud-init does not seem to support the DigitalOcean metadata datasource, while the EL7 release of the same version seems to.
I haven't dug into it further, but if you use cloud-init you should still set the datasource to DigitalOcean even though EL6 doesn't support it. It will fall back to None and avoid the 120-second timeout cloud-init otherwise hits while querying the wrong datasource (EC2 by default).

echo 'datasource_list: [ DigitalOcean, None ]  
datasource:  
 DigitalOcean:
   retries: 5
   timeout: 10

' >> /etc/cloud/cloud.cfg

For EL6, to obtain the SSH key, something simple like this may work:

curl http://169.254.169.254/metadata/v1/public-keys > /home/cloud-user/.ssh/authorized_keys  
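
If the .ssh directory doesn't exist yet on first boot, a slightly more defensive version (cloud-user here is the account assumed by the %post snippets above; substitute your own):

mkdir -p /home/cloud-user/.ssh
curl -s http://169.254.169.254/metadata/v1/public-keys > /home/cloud-user/.ssh/authorized_keys
chown -R cloud-user:cloud-user /home/cloud-user/.ssh
chmod 700 /home/cloud-user/.ssh
chmod 600 /home/cloud-user/.ssh/authorized_keys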

If this helped and you do end up giving DigitalOcean a try, sign up with my referral link https://www.digitalocean.com/?refcode=afbafb6012b6 and I get $25 in credit while you get a free $10 just for using the link!

Happy hacking!

]]>
<![CDATA[Roll your own CentOS 6.5 HVM AMI in less than 15 minutes]]>Update: Selinux sometimes does not install properly unless you have at least 1024MB combined memory (physical and swap).

So after attending the AWS summit 2014 in Melbourne, I was sold on the benefits it brought vs pursuing the hassle of racking hardware and configuring the low level functions. While I

]]>
https://www.andrewklau.com/roll-your-own-centos-6-5-hvm-ami-in-less-than-15-minutes/76146835-84c2-4dc8-8110-b91662dc6162Fri, 15 Aug 2014 05:03:56 GMTUpdate: Selinux sometimes does not install properly unless you have at least 1024MB combined memory (physical and swap).

So after attending the AWS Summit 2014 in Melbourne, I was sold on the benefits it brought versus the hassle of racking hardware and configuring the low-level functions. While I find that fun and interesting, it's probably the most time-consuming effort, and when you compare it to the performance you get from AWS it's hard to compete. Sure, you can get refurb hardware, but that doesn't beat the performance AWS can give you. Enough rambling...

We all know the flaws of the marketplace CentOS image, but the big killer was that it doesn't support HVM. I want the new t2 instance types!!! Then the community third-party AMIs are riddled with random stuff and I don't know what they've done... custom repos, disabled SELinux!?!?!?! I stopped there.

Let's cut the crap; here's all you need to do:

# Create a new instance
# Just choose any CentOS 6 image with HVM and EBS support. I used  RightImage_CentOS_6.5_x64_v13.5.2_HVM_EBS (ami-45950b7f)
# Now login, RightImage seems to use root as the account

mkdir /boot/centos  
cd /boot/centos  
wget http://mirror.centos.org/centos/6/os/x86_64/isolinux/vmlinuz  
wget http://mirror.centos.org/centos/6/os/x86_64/isolinux/initrd.img

echo '  
default         0  
timeout         0  
hiddenmenu

title CentOS 6 VNC Installation  
        root (hd0,0)
        kernel /boot/centos/vmlinuz vnc vncpassword=yourvncpassword ip=dhcp xen_blkfront.sda_is_xvda=1 ksdevice=eth0 ks=https://gist.githubusercontent.com/andrewklau/9c354a43976d951bdedd/raw/266f8acc3c4a09af0cde273e6046bd9fc26ca9ea/centosami.ks method=http://mirror.centos.org/centos/6/os/x86_64/ lang=en_US keymap=us
initrd /boot/centos/initrd.img ' > /boot/grub/menu.lst

reboot  

Boom! That's it; let it go and it'll install and shut down once it's finished. Mine took about 5 minutes. Once it's done, right click and press "Create Image"... that's your AMI there. A few tips: go modify the kickstart to fit your requirements; the key things I've got in there are all commented.

You probably also noticed the vnc and vncpassword parameters in the grub config. If you want, you can follow along with the install by connecting your VNC client to ipaddress:1, aka ipaddress:5901.
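
For example, with a stock VNC client (display :1 maps to TCP port 5901, so open that port in the instance's security group first):

vncviewer <instance-public-ip>:1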

I used to do these installs on my physical boxes all the time; why bother spinning up that ugly Java Dell/IBM KVM interface when VNC is colorful and lag-free! Another thing to note: if your install seems to be taking a while, your KS is probably borked and it's just hung. If you can ping your instance but VNC hasn't come up yet, it's errored at a kickstart entry. I used a local VM to test my ks file first.

You can also start up a local VM and upload it as an AMI, but that's a painful process working out all the APIs etc. I did it, and it took me a lot longer than 15 minutes. Either way, my new AMI just finished building, so /endrant

]]>
<![CDATA[Controlling glusterfsd CPU outbreaks with cgroups]]>Some of you may know that same feeling when adding a new brick to your gluster replicated volume which already has in excess of 1TB of data on there, and suddenly your gluster server has shot up to 500% CPU usage. What's worse is when my hosts run alongside oVirt

]]>
https://www.andrewklau.com/controlling-glusterfsd-cpu-outbreaks-with-cgroups/95acccb8-ee48-48bc-9ad7-e4229169f5f3Mon, 03 Feb 2014 23:49:14 GMTSome of you may know that same feeling when adding a new brick to your gluster replicated volume which already has in excess of 1TB of data on there, and suddenly your gluster server has shot up to 500% CPU usage. What's worse is that my hosts run alongside oVirt, so while gluster hogged all the CPU my VMs started to crawl; even running simple commands like top would take 30+ seconds. Not a good feeling.

In my first attempt I limited the NIC's bandwidth to 200Mbps rather than the 2x1Gbps aggregated link, and this calmed glusterfsd down to a healthy 50%. A temporary fix, which however meant clients accessing gluster storage would be bottlenecked by that shared limit.

So off to the mailing list - a great suggestion from James/purpleidea (https://ttboj.wordpress.com/code/puppet-gluster/) on using cgroups.

The concept is simple: we limit the total CPU glusterfsd sees, so when it comes to doing the checksums for self-heals, replication etc., it won't have the high priority which other services such as running VMs have. This effectively slows down the replication rate in return for lower CPU usage.

First make sure you have the libcgroup package installed (RHEL/CentOS).

Now you want to modify /etc/cgconfig.conf so you've got something like this (keep in mind comments MUST be at the start of the line or you may get parser errors):

mount {  
    cpuset  = /cgroup/cpuset;
    cpu = /cgroup/cpu;
    cpuacct = /cgroup/cpuacct;
    memory  = /cgroup/memory;
    devices = /cgroup/devices;
    freezer = /cgroup/freezer;
    net_cls = /cgroup/net_cls;
    blkio   = /cgroup/blkio;
}
group glusterfsd {  
        cpu {
# half of what libvirt assigns individual VMs (1024) - approximately 50% cpu share
                cpu.shares="512";
        }
        cpuacct {
                cpuacct.usage="0";
        }
        memory {
# limit the max ram to 4GB and 1GB swap
                memory.limit_in_bytes="4G";
                memory.memsw.limit_in_bytes="5G";
        }
}

group glusterd {  
        cpu {
# half of what libvirt assigns individual VMs (1024) - approximately 50% cpu share
                cpu.shares="512";
        }
        cpuacct {
                cpuacct.usage="0";
        }
        memory {
# limit the max ram to 4GB and 1GB swap
                memory.limit_in_bytes="4G";
                memory.memsw.limit_in_bytes="5G";
        }
}

Now apply the changes to the running service:
service cgconfig restart

What this has done is define two cgroups (glusterfsd and glusterd). I've assigned the CPU share of each group to half of what libvirt assigns a VM, along with some fixed memory limits just in case. The important one here is cpu.shares.
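
Once cgconfig has restarted you can sanity-check that the values actually landed, assuming the mount points from the cgconfig.conf above:

cat /cgroup/cpu/glusterfsd/cpu.shares                  # should print 512
cat /cgroup/memory/glusterfsd/memory.limit_in_bytes    # the 4G limit, expressed in bytes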

One last thing to do is modify the services so they start up in the cgroups. You can easily do this manually, but the recommended way (according to Red Hat docs) was to modify /etc/sysconfig/<service>:

[root@hv01 ~]# cat /etc/sysconfig/glusterd 
# Change the glusterd service defaults here.
# See "glusterd --help" outpout for defaults and possible values.

#GLUSTERD_LOGFILE="/var/log/gluster/gluster.log"
#GLUSTERD_LOGLEVEL="NORMAL"

CGROUP_DAEMON="cpu:/glusterd cpuacct:/glusterd memory:/glusterd"  
[root@hv01 ~]# cat /etc/sysconfig/glusterfsd
# Change the glusterfsd service defaults here.
# See "glusterfsd --help" outpout for defaults and possible values.

#GLUSTERFSD_CONFIG="/etc/glusterfs/glusterfsd.vol"
#GLUSTERFSD_LOGFILE="/var/log/glusterfs/glusterfs.log"
#GLUSTERFSD_LOGLEVEL="NORMAL"

CGROUP_DAEMON="cpu:/glusterfsd cpuacct:/glusterfsd memory:/glusterfsd"  

Quick sum-up: we assign the gluster{d,fsd} services to the gluster{d,fsd} cgroups and list the controllers (cpu, cpuacct, memory) we want applied to them.

Now make sure cgconfig comes on at boot: chkconfig cgconfig on

Ideally now, you should just send the host for a reboot to make sure everything's working the way it should.

When it comes back up, you can try the command cgsnapshot -s to see what your current rules are. -s will just ignore the undefined values.

Alternatively, shut down the gluster services before you define CGROUP_DAEMON in the sysconfig files, then define CGROUP_DAEMON and start the gluster services again; this should properly put them in the correct cgroups.

Note: I've only really tested this for a day, and so far I'm pretty impressed, as the replication is no longer eating up my CPU and I haven't seen any performance drop in terms of read/write; all we've done is limit CPU and memory, and bandwidth is untouched.

If you do your Google research you can also find the non-persistent method where you modify the files in /cgroup/ and create the groups there. I recommend doing that first to find the best config values for your systems.
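
A rough sketch of that non-persistent approach, handy for experimenting before committing values to cgconfig.conf (paths assume the mounts shown earlier; everything here disappears on reboot):

# create a throwaway group and cap its CPU share
mkdir -p /cgroup/cpu/glustertest
echo 512 > /cgroup/cpu/glustertest/cpu.shares

# move the running glusterfsd processes into it
for pid in $(pidof glusterfsd); do
    echo $pid > /cgroup/cpu/glustertest/tasks
done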

For those interested, with my config values on a 2x quad-core server I cleaned out a brick and forced a re-replicate of the 1TB, and glusterfsd happily chugged away at around 50% CPU and 200Mbps data transfer. I'm quite happy with that result; the obvious trade-off of CPU for replication rate is worth it for my scenario.

Please leave your suggestions/feedback and whether you found any possible ideal values for cgconfig.

HTH

Update (July 2014):

After a few months of using cgroups, I've removed the memory limits as gluster isn't that memory intensive. As noted in a comment as well, with a memory limit we sometimes hit the OOM killer, which is not great!

CPU share DOES affect the read/write speed, so tweaking is required! The recent 3.5 release seems to be much better with CPU usage, making this approach look like it's becoming obsolete. So kudos to the gluster devs!!

]]>