Bird’s memory usage with 8 BGP sessions (6 of which carry full BGP tables):

root@sfgw:~# birdc show memory
BIRD 1.5.0 ready.
BIRD memory usage
Routing tables:    160 MB
Route attributes:  136 MB
ROA tables:        112  B
Protocols:          68 kB
Total:             295 MB

And here are the prefix stats:

root@sfgw:~# bgp_states
BIRD                    1.5.0   ready.
bgp_itd_backup          BGP     main    up      2016-02-16      Established
  Routes:         576411 imported, 1 exported, 21079 preferred
bgp_itd_main            BGP     main    up      2016-02-16      Established
  Routes:         576411 imported, 1 exported, 0 preferred
bgp_telehouse_main      BGP     main    up      2016-02-19      Established
  Routes:         576876 imported, 1 exported, 217446 preferred
bgp_telehouse_backup    BGP     main    up      2016-02-19      Established
  Routes:         470907 imported, 1 exported, 286422 preferred
bgp_evolink_main        BGP     main    up      2016-02-19      Established
  Routes:         576112 imported, 1 exported, 57281 preferred
bgp_evolink_backup      BGP     main    up      07:06:39        Established
  Routes:         576112 imported, 1 exported, 0 preferred
bgp_evolink_bg_backup   BGP     main    up      10:55:36        Established
  Routes:         9767 imported, 1 exported, 0 preferred
bgp_evolink_bg_main     BGP     main    up      10:56:32        Established
  Routes:         9767 imported, 1 exported, 536 preferred
Posted by HackMan
Dated: 23rd February 2016
Filed Under: Networking, Technology, Uncategorized

I finally decided to request full BGP tables from all of my ISPs, so I can easily change the preferred path to certain destinations.

However, this meant that I now have to monitor not only the state of the BGP sessions, but also the number of routes I receive from my neighbors.

Before my days with full BGP tables I relied on this simple alias:

alias bgp_states='birdc show protocols|awk "/^BIRD|bgp/{printf \"%20s\t%s\t%s\t%s\t%s\t%s\n\", \$1, \$2, \$3, \$4, \$5, \$6}"'
alias bgp_states='birdc show protocols|column -t|grep -E "^BIRD|bgp"'

The above alias (with its two implementations) worked like a charm and produced:

root@sfgw:~# bgp_states
BIRD                  1.5.0   ready.
bgp_itd_backup        BGP     main    up     2016-02-16  Established
bgp_evolink_main      BGP     main    up     2016-02-16  Established
bgp_evolink_backup    BGP     main    up     2016-02-16  Established
bgp_itd_main          BGP     main    up     2016-02-16  Established
bgp_telehouse_main    BGP     main    up     18:08:55    Established
bgp_telehouse_backup  BGP     main    up     18:09:25    Established
root@sfgw:~#

However, with full BGP tables, I needed a little bit more information. So I replaced the above aliases with this function:


function bgp_states {
    for i in $(birdc show protocols|grep -E "^BIRD|bgp"|sed 's/\s\+/|/g'); do
        a=( ${i//|/ })
        echo ${a[*]}|awk '{printf "%-16s\t%s\t%s\t%s\t%s\t%s\n", $1, $2, $3, $4, $5, $6;}'
        birdc show protocol all ${a[0]}|grep Routes
    done
}

I was lazy and didn’t want to implement it properly with while read or IFS. Maybe I’ll write a version two of this function as well.
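For completeness, a cleaner variant using while read (a quick, untested sketch that should produce the same output, just without the sed/array juggling) would look roughly like this:

function bgp_states {
    birdc show protocols | grep -E '^BIRD|bgp' | while read -r name proto table state since info; do
        printf '%-16s\t%s\t%s\t%s\t%s\t%s\n' "$name" "$proto" "$table" "$state" "$since" "$info"
        # the BIRD banner line is not a protocol, so skip the detail lookup for it
        [ "$name" = "BIRD" ] && continue
        birdc show protocol all "$name" | grep Routes
    done
}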

Either way, the results are a bit more informative:

root@sfgw:~# bgp_states
BIRD                    1.5.0   ready.
bgp_itd_backup          BGP     main    up      2016-02-16      Established
  Routes:         575148 imported, 1 exported, 20951 preferred
bgp_evolink_main        BGP     main    up      2016-02-16      Established
  Routes:         1 imported, 1 exported, 0 preferred
bgp_evolink_backup      BGP     main    up      2016-02-16      Established
  Routes:         1 imported, 1 exported, 0 preferred
bgp_itd_main            BGP     main    up      2016-02-16      Established
  Routes:         575148 imported, 1 exported, 0 preferred
bgp_telehouse_main      BGP     main    up      18:08:55        Established
  Routes:         575939 imported, 1 exported, 233100 preferred
bgp_telehouse_backup    BGP     main    up      18:09:25        Established
  Routes:         472013 imported, 1 exported, 324399 preferred
root@sfgw:~#
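And since the whole point of this is to keep an eye on the prefix counts, the same two birdc commands could feed an automated check. This is only a rough, untested sketch; the threshold and the session matching are placeholders:

#!/bin/bash
# hypothetical cron check: warn when a session that should carry a full table
# imports suspiciously few routes (500000 is an arbitrary placeholder)
MIN_ROUTES=500000
birdc show protocols | awk '/^bgp/{print $1}' | while read -r proto; do
    imported=$(birdc show protocol all "$proto" | awk '/Routes:/{print $2}')
    if [ "${imported:-0}" -lt "$MIN_ROUTES" ]; then
        echo "WARNING: $proto imports only ${imported:-0} routes"
    fi
done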
Posted by HackMan
Dated: 19th February 2016
Filed Under: Linux General, Networking, Technology, Uncategorized

Since I’m a long-time user of Xchat, I decided to upgrade it and found that it does not compile with recent glib.

The problem is that newer versions of glib introduced a restriction on includes: you now have to include only glib.h, and any more specific inclusion breaks the build.

Even though I recently switched to HexChat, I made a patch for Xchat 2.8.8 so it can compile with the newer glib versions.
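Roughly speaking, fixing a build like this comes down to replacing direct sub-includes such as <glib/gutils.h> with plain <glib.h> (the actual patch is linked below). A quick, hypothetical way to list the offenders in a source tree:

# hypothetical helper, not part of the patch: list direct glib sub-includes;
# glib/gstdio.h, glib/gi18n.h and glib/gprintf.h are not covered by glib.h
# and may stay as direct includes, so they are filtered out
grep -rn '#include <glib/' src/ | grep -vE 'gstdio|gi18n|gprintf'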

The patch is available here.

Posted by HackMan
Dated: 18th February 2016
Filed Under: Linux General, Technology

Very often I log in on machines with an older mii-tool that reports 1Gbit/s cards as if they are connected at 100Mbit/s. This is normal, as mii-tool is deprecated, but a lot of old-timers like me are used to its output. So I wrote a very simple awk script which converts the ethtool output to the one-liner output of mii-tool:

#!/bin/bash
# mimic the old mii-tool one-line output using ethtool
if [ -z "$1" ]; then
    echo "Usage: mii-tool DEV"
    exit
fi
ethtool $1 2>&1|awk '
/Settings/{d=$3}
/Speed/{
    s=$2
    if (s !~ /Unknown/) {
        gsub(/Mb.*/,"",s)
        s=s"baseTx-"
    }
}
/Duplex/{
    if ($2 == "Full")
        s=s"FD"
    else
        s=s"HD"
}
/Link/{
    if ($3 == "yes")
        l="link ok"
    else
        l="no link"
}
/No data available/{err=1}
END{
    if (err) {
        gsub(/:/,"",d)
        print "SIOCGMIIPHY on \""d"\" failed: No such device"
        exit
    }
    if (s!~ /Unknown/)
        print d," "s", "l
    else
        print d, l
}'

And a shorter version for your .bashrc:
function mii-tool { ethtool $1|awk '/Settings/{d=$3}/Link/{if($3=="yes")l="link ok";else l="no link"}/Speed/{s=$2}/Duplex/{u=$2}END{if(s!~/Unknown/)print d," "s", "u", "l;else print d,l}'; }
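With that in your .bashrc, a call looks roughly like this (interface name and values are only an example); note that the function will shadow a real mii-tool binary if one is installed, which is the point here:

mii-tool eth0
# eth0:  1000Mb/s, Full, link ok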

Posted by HackMan
Dated: 7th December 2015
Filed Under: Linux General, Technology

Recently I wanted to check if I had reached the limit of my network connection. However, I constantly had to log in to my router and check the traffic on each interface… and I had to watch it continuously.

So I decided to write a tool which would collect the information and regularly update a DB, from which I would create graphs… a few minutes later I remembered a good old piece of software called vnstat. I immediately installed vnstat on the router and also added jsvnstat for a nice web interface to vnstat.

However, a problem remained… I needed to see the traffic live, and constantly updating a DB and regenerating graphs is not a very modern approach. So I decided to see what I could do to view the actual network traffic stats at the moment they are collected.

This is how Placky was born. Now I can see the live statistics directly from my browser… even on a phone :)

If you are interested, I have left my home placky interface open to everyone.

Posted by HackMan
Dated: 15th October 2015
Filed Under: Technology

Today I decided to join my co-located server to pool.ntp.org.
It was surprisingly easy and now I have my machine contributing to the big effort that is pool.ntp.org.

Statistics for it can be found here: http://www.pool.ntp.org/user/hackman

Posted by HackMan
Dated: 15th September 2014
Filed Under: Technology

After I created my Linux::Setns module I started using Linux::Unshare, and I found that it was missing a few tests.

I added them and sent a patch to the current maintainer Boris Sukholitko. However a few days later he wrote to me that he is no longer maintaining the module and proposed that I should take over maintainership.

So since today, I’m the maintainer of Linux::Unshare.

I’m going to release version 0.04 tomorrow.
I also created a GitHub repo for Linux::Unshare.

Posted by HackMan
Dated: 28th July 2014
Filed Under: Technology, Uncategorized

I’m preparing a talk for YAPC::EU. The talk will be about managing Linux containers in Perl, and for that I needed some functionality that was missing ;)
So I created a new Perl module: Linux::Setns.

This is my first attempt at building and supporting modules on CPAN… but I hope I’ll add a few more in the same area soon.

The code is kept on GitHub.

Posted by HackMan
Dated: 28th July 2014
Filed Under: Technology, getClouder

Requirements:

wget https://www.kernel.org/pub/linux/kernel/v3.x/linux-3.12.24.tar.xz
wget http://hydra.azilian.net/3.12.24-config
tar xfJ linux-3.12.24.tar.xz
cd linux-3.12.24
cp ../3.12.24-config .config
git init
git add .
git commit -a -m 'initial'

Tasks:

  • Patch /proc/cpuinfo, so when you are not in the main CGroup, it will show you only the CPU cores you are allowed to use.
  • Patch /proc/stat, so when you are not in the main CGroup, it will show you only the CPU cores you are allowed to use.
  • Patch /proc/meminfo, so when you are not in the main CGroup, it will show you the total and free memory according to the limits set on your current CGroup.
  • Patch /proc/partitions, so when you are not in the main CGroup, it will show you only the partitions/devices that you are allowed to use.

Tips:

  • File systems code is located under the ‘fs’ directory. So the proc file system code is located under ‘fs/proc’ directory.
  • When building your kernel, use ‘-jX’, where X is the number of CPU cores you have plus 2. So if you have 4 cores, you should use -j6.
    make -jX
  • Control Groups are a Linux mechanism for imposing limits on a group of processes, instead of per process as is usual. We will use this functionality to set limits on some resources.
        cgroup - /
                 |- cpuset.cpus (used for the /proc/cpuinfo limit)
                 |- devices.allow (used to set the block device limit,
                 |   used for the /proc/partitions limit)
                 |- devices.list (used to read the limits imposed using
                 |   devices.allow)
                 |- memory.limit_in_bytes (used for the /proc/meminfo limit)
    
  • Finding the current control group you are in: the kernel exposes a global pointer ‘current’, which points to the process that is currently running. ‘current’ is of type task_struct, and the task_struct structure is defined in ‘include/linux/sched.h’. You must find where the control groups code resides, but keep in mind that most of the functionality around control groups uses the ‘cgroup_subsys_state’ structure, which is regularly shortened to just ‘css’. A quick userspace sanity check is shown right after this list.
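Before diving into the kernel side, it can help to look at the same information from userspace. This is only an illustrative sanity check (the /cgroup/tt paths assume the hierarchy is mounted and the 'tt' group created as in the testing section below):

# which control group is my shell in, per subsystem?
cat /proc/self/cgroup
# what CPU set and memory limit apply to the 'tt' group?
cat /cgroup/tt/cpuset.cpus
cat /cgroup/tt/memory.limit_in_bytes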

Testing your code

After successful compilation you should be able to boot a VM with your new kernel. There are a few commands that are the same for all tests:


mkdir /cgroup
mount -t cgroup none /cgroup
mkdir /cgroup/tt
echo 0 > /cgroup/tt/cpuset.cpus
echo 0 > /cgroup/tt/cpuset.mems
echo $$ > /cgroup/tt/tasks

Then for testing the cpuinfo:
cat /proc/cpuinfo

Then for testing the stat:
cat /proc/stat

Then for testing the meminfo:

echo 268435456 > /cgroup/tt/memory.limit_in_bytes
cat /proc/meminfo

Then for testing the partitions:

echo 'b 8:1 rw' > /cgroup/tt/devices.allow
cat /proc/partitions
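A few rough sanity checks for the expected results (assuming the limits set above, and assuming device 8:1 is sda1 in your VM):

grep -c ^processor /proc/cpuinfo   # should print 1 with cpuset.cpus set to 0
grep MemTotal /proc/meminfo        # should report roughly 262144 kB (the 256 MB limit)
cat /proc/partitions               # should list only the allowed device (sda1)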

Posted by HackMan
Dated: 21st July 2014
Filed Under: Teaching, Technology

I needed an easy way to download videos I streamed to Twitch.tv, so I created this small script, which downloads all parts of the video, converts them to MPEG-TS and then combines them into a single FLV video, ready for upload to YouTube.

#!/bin/bash

video_dir=~/twitch

if [ $# -ne 1 ]; then
        echo "Usage: $0 twitch_video_id"
        exit 0
fi

id=$1
video_urls=( $(curl http://api.justin.tv/api/broadcast/by_archive/${id}.xml?onsite=true | grep video_file_url | sed 's/.*url>\(http:.*\)<\/vid.*/\1/') )

if [ ! -d $video_dir ]; then
        mkdir $video_dir
fi

cd $video_dir

# download the videos
for i in ${video_urls[*]}; do
        wget -c $i
done

rm -f int*.ts
concat_list='concat:'
last_num=${#video_urls[*]}
let last_num--

# convert the videos to Mpeg TS video format
for i in $( seq 0 $last_num ); do
        ffmpeg -i ${video_urls[$i]/*\//} -c copy -bsf:v h264_mp4toannexb -f mpegts int${i}.ts
        if [ "$i" -eq 0 ]; then
                concat_list="${concat_list}int${i}.ts"
        else
                concat_list="$concat_list|int${i}.ts"
        fi
done

# merge the files together
ffmpeg -f mpegts -i "$concat_list" -c copy for_youtube_${id}.flv
Posted by HackMan
Dated: 10th October 2013
Filed Under: Teaching, Technology