Virtualisation comes in handy for a small office: we can run a bunch of disparate services on limited hardware.
A second-hand laptop serves as the KVM host, running Debian with libvirt. I like using laptops as servers because of the built-in UPS. However, cooling and hard-disk reliability need extra care in laptops, so for cooling, some cheap (RM10) laptop cooler fans were installed.
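With libvirt in place, defining a new guest is a one-liner. The sketch below is hypothetical: the guest name, memory size and ISO path are examples, not the actual VMs in the office, and the `virt-install` call itself is commented out since it needs a running libvirtd and an installer image.

```shell
# Hypothetical guest: name, sizes and the ISO path are examples only.
VM_NAME="git-server"
VM_RAM_MB=512

# virt-install asks libvirt to define and boot a new KVM guest.
# (Commented out: needs root, a running libvirtd and a real ISO.)
# virt-install --name "$VM_NAME" --ram "$VM_RAM_MB" --vcpus 1 \
#     --disk size=8 --cdrom /isos/debian-netinst.iso

echo "would create guest: $VM_NAME with ${VM_RAM_MB}MB RAM"
```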
Storage is off-loaded to a dedicated NAS server running FreeNAS. I opted for a custom-assembled machine instead of an off-the-shelf NAS solution due to cost: building a NAS from parts either salvaged or bought from Lowyat is really cheap. The construction of the NAS deserves its own little blog entry.
Every effective engineering team works with a code repository. We run our own in-house Git repository inside a single VM, using gitosis on a Debian system. The git-flow workflow is used to structure the development process. A repository is essential for both managing and documenting code development.
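gitosis keeps its access control in a `gitosis.conf` file inside its own `gitosis-admin` repository. A hypothetical fragment (the group, member and repository names here are made up for illustration):

```ini
[gitosis]

[group developers]
# Members are matched against the SSH public keys pushed
# into the gitosis-admin repository's keydir/.
members = alice bob
writable = firmware website
```

Granting someone access is then just a commit and push to `gitosis-admin`, which is part of the appeal.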
There are lots of documents that need to be hosted within a company, from technical texts to light-reading magazines. For this purpose, KnowledgeTree is used to keep track of documents. The easiest way to install it is with the pre-packaged Ubuntu Hardy packages, so this was done inside a Hardy VM instead of a Debian one.
Things need to be monitored, and this is done using Munin, the de facto standard in network resource monitoring. It generates a bunch of nice, detailed graphs of things such as disk usage, CPU performance, memory consumption and network bandwidth, even for the virtualised machines. It can easily be integrated with other monitoring tools in the future. Setting it up on Debian is a snap; just remember to enable the appropriate plugins.
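On Debian, enabling a Munin plugin means symlinking it from the plugin library into the node's active-plugin directory and restarting munin-node. The paths below are the stock Debian locations; the actual linking is commented out since it needs root on the monitored node.

```shell
# Stock Debian locations for Munin plugins.
PLUGIN_DIR=/usr/share/munin/plugins
ACTIVE_DIR=/etc/munin/plugins

# Enable a few of the plugins mentioned above (run as root):
# ln -s "$PLUGIN_DIR/df"     "$ACTIVE_DIR/df"      # disk usage
# ln -s "$PLUGIN_DIR/cpu"    "$ACTIVE_DIR/cpu"     # CPU performance
# ln -s "$PLUGIN_DIR/memory" "$ACTIVE_DIR/memory"  # memory consumption
# /etc/init.d/munin-node restart

echo "plugins link from $PLUGIN_DIR into $ACTIVE_DIR"
```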
The purpose of this resource monitoring is to collect data on server usage, which will help when planning future upgrades to the local infrastructure. Things should cope nicely for now, except maybe the KnowledgeTree server, which is quite resource-heavy.
A transparent proxy is also used to consolidate all Internet access. It serves both the private local network and the public wireless network, and allows filters such as DansGuardian and Privoxy to enforce network policy. The proxy server used is Squid, the de facto standard in web caching.
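A minimal sketch of the transparent-proxy side, assuming the Squid 2.x that shipped with Debian at the time (the network addresses and port are examples, not the office's real ones):

```
# squid.conf fragment: listen transparently.
# ("transparent" is Squid 2.x syntax; newer versions use "intercept".)
http_port 3128 transparent

# Allow the local networks; these subnets are examples.
acl localnet src 192.168.1.0/24
http_access allow localnet

# On the gateway, web traffic gets redirected to the proxy with
# something like:
#   iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80 \
#       -j REDIRECT --to-port 3128
```

Because the redirect happens at the gateway, clients need no browser configuration at all, which is the whole point of running it transparently.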
Every company should have some sort of network access policy, for both management and security purposes. The proxy is one way of enforcing it; the other is via the DNS service.
Domain Name Service
All DNS queries are routed to a local dnsmasq server, which forwards them upstream to the OpenDNS servers. OpenDNS was chosen because it allows fine-grained control over which kinds of sites are allowed or blocked. At AESTE, the medium policy is the default, which blocks porn, gambling, adware, malware and other generally NSFW stuff.
Having a local DNS server also means that local machines, particularly servers and networked devices, can be addressed by local names instead of IP addresses. The local DNS is not used for our Internet infrastructure, which is outsourced to our registrar and/or hosting companies.
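Both halves of this setup fit in a few lines of `dnsmasq.conf`. The local domain name below is a hypothetical example; the upstream addresses are OpenDNS's public resolvers.

```
# /etc/dnsmasq.conf fragment

# Forward everything upstream to the OpenDNS resolvers.
server=208.67.222.222
server=208.67.220.220

# Serve local names from this box's /etc/hosts, qualified
# with a local domain ("office.lan" is a made-up example).
domain=office.lan
expand-hosts
```

With `expand-hosts`, an `/etc/hosts` entry for `nas` answers lookups for both `nas` and `nas.office.lan` on the LAN.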
Since a lot of the work requires compiling kernels, compilers and other complex software, the ability to distribute compilation across machines is very helpful. All the desktops have distcc installed, which spreads compilation jobs across the workstations, turning them into a single massive compile farm.
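Using the farm from any desktop is just an environment variable and a parallel make. The host list below is hypothetical; substitute the workstations' real addresses.

```shell
# Hypothetical host list; replace with the workstations' addresses.
export DISTCC_HOSTS="localhost 192.168.1.11 192.168.1.12 192.168.1.13"

# Pick a parallelism level based on how many hosts are listed.
NJOBS=$(echo "$DISTCC_HOSTS" | wc -w)

# A kernel build then becomes something like:
#   make -j"$NJOBS" CC="distcc gcc"
echo "distributing builds across $NJOBS hosts"
```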
That’s some of the key local infrastructure running in the office. It allows us to work with the Internet and with each other.
PS: Did I mention that there is a lot of second-hand hardware in the office? 🙂