Albrechts Blog

About programming, mostly.

Services in Ubuntu 16.04

Since 15.04, Ubuntu uses systemd to manage services. Time to learn something new.

The key to systemd is the command systemctl.

Disable a service (so that it does not start when system reboots):
sudo systemctl disable mongodb

Enable a service (for automatic startup when system reboots):
sudo systemctl enable mongodb

Start / Stop a service:
sudo systemctl start mongodb
sudo systemctl stop mongodb
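
Check the current state of a service:
sudo systemctl status mongodb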

Further helpful tweaks for systemd can be found on this page. See also this former post about upstart and SysV runlevels.

Radish Salad

A tasty summer salad, as a side dish or in between meals.

What to do with the radishes? This salad is a tasty option when you have a few radishes left over. And the preparation could not be simpler.

Ingredients for two

  1. One bunch of radishes
  2. A good handful of parsley
  3. Neutral-tasting oil, e.g. rapeseed oil
  4. Vinegar, e.g. balsamic
  5. Salt

Preparation

  1. Wash the radishes and cut them into small pieces.
  2. Wash the parsley and chop it finely.
  3. Combine the radishes and parsley in a bowl.
  4. Add 2 tablespoons of oil.
  5. Add 1-2 teaspoons of vinegar.
  6. Salt generously and mix well.

Besides the popular balsamic, other vinegars work well too, e.g. wine vinegar.

Parsley, oil and salt take the sharpness out of the radishes; the vinegar adds a contrasting note, but it must not dominate, so use it sparingly.

The salad may rest for an hour to let the flavors blend, but it does not have to.

Determining Affected Tests Using Coverage Info

I have got an idea, inspired by this post.

Every time I or a build server run a complete build of a commit with all tests, detailed coverage information could be stored and associated with that commit.

Later on, when I am fixing a bug for instance, it should be possible to git diff the working directory against the latest parent commit that has associated coverage info. Looking at the changes and the coverage info, one could tell which tests are likely to be affected by the changes in between.

If I then run only those tests, I am pretty confident that the other tests will not be broken by my changes. I am not 100% sure, but I am confident enough to commit the stuff and let the build server run a complete build, which leads to new coverage info associated with this new commit.
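
A rough command line sketch of the lookup step (the file names .last-covered-commit and coverage-map.txt are made up for illustration; the map is assumed to hold one "source-file test-class" pair per line, derived from the stored coverage report):

# last commit that has coverage info associated with it (made-up marker file)
BASE=$(cat .last-covered-commit)
# changed source files since then, looked up in the coverage map
git diff --name-only "$BASE" -- '*.java' \
  | grep -F -f - coverage-map.txt \
  | awk '{print $2}' | sort -u > affected-tests.txt
# run only the affected test classes via surefire's -Dtest parameter
mvn test -Dtest="$(paste -sd, affected-tests.txt)"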

Some requirements I have in mind:

  • Should work as a maven goal “mvn affected:test”
  • Should work with git to provide diff and coverage info association.
  • Should work with remotely stored coverage info. (Is it possible to store custom information associated with a commit in the git repository? See the note after this list.)
  • Should work with cobertura
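
Regarding the question in parentheses: git has a "notes" mechanism that attaches extra data to a commit without changing it, which might be one way to associate coverage info with commits:

git notes add -m 'coverage: <where the report is stored>' <commit>
git notes show <commit>
git push origin refs/notes/commits    # notes live in their own ref and are pushed explicitly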

I would really like to know if I am the only one who thinks such a maven plugin would be useful, what obstacles you see, whether there are better alternatives to cobertura and git, or whether a broader approach would make more sense. Let me know.

Tool Dilemma

If you write enterprise-size software in Java, sooner or later you run into a dilemma regarding tools.

On one hand you know that using IDEs like Eclipse drives efficiency and adds support for streamlined workflows. IDEs provide handy features like call hierarchy and integrated debugging.

On the other hand, IDEs tend to introduce questionable artefacts into the overall build infrastructure, and sometimes they even drive decisions on architectural questions.

If you, for example, use maven as your dependency and build infrastructure, then the IDE has to completely understand your maven declarations to know about the classpath, output folders, compiler level etc. You tend to avoid maven features that your favorite IDE does not understand.

If you always follow standard solutions and best practices, there might be no big issue between the IDE and the maven build. But sooner or later challenges for the IDE will appear, such as cross compiling, multi-module projects, or generated code.

In such situations it is recommended to take a step back and analyze the overall requirements for architecture and build infrastructure. Why do we have this generated code, where is it used, what does the generation depend on, when should it be executed? As a result, you should end up with a reasonable model of all build steps and an acyclic module dependency model. Often the root of all evil lies in unmanaged architecture and build infrastructure. Fix this and go ahead.

Step number two is to reflect the findings from step one in the actual dependency declarations and build infrastructure. A valid result of this step is a working command line build.
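
For a maven setup this can be as simple as a plain

mvn clean verify

succeeding on a fresh checkout, without any IDE involved.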

The last step is then to tweak the IDE to support the desired architecture. Sometimes, during this step, the build infrastructure needs to be extended or even changed a little, but the changes should never go so far as to alter the desired architecture. If the IDE is not able to cover the target architecture, you should probably rethink whether this IDE is a good choice or whether your requirements are too ambitious.

Bye Bye, Cygwin

At work we use Windows computers, and I am currently in the transition from one computer to the next. This process typically takes at least a month, until I am confident that I have not missed anything.

Today I realized that I will not install cygwin on the new computer, because… well, I have not used it for ages. For the last few years I have always run at least one virtual machine with Ubuntu on my Windows host, which gives me a fully fledged Linux environment with access to the file system of the host.

Before that time I used to reinstall cygwin and transfer settings and scripts until everything worked again.

But today I have no real use case for it anymore. The pain in a mixed environment (file name encoding, line endings, etc.) is the same with both approaches. I also have no use for DOS scripts that call Unix tools compiled for DOS.

So all in all I would like to say Good Bye to cygwin and thanks for all the fish.

A Replacement for Truecrypt

When I realized that Truecrypt would no longer be developed, I looked around for a proper replacement.

In the end I wrote a set of scripts to ease the mounting of AES encrypted file containers under Linux. Let’s say this is a command line truecrypt replacement.

Check it out here.
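
For the curious, mounting an encrypted file container on Linux typically boils down to something like the following (a rough sketch using cryptsetup/LUKS; not necessarily what the scripts above do, and the names container.img and mycontainer are made up):

dd if=/dev/zero of=container.img bs=1M count=512     # create a 512 MB container file
sudo cryptsetup luksFormat container.img             # initialize it as a LUKS volume (AES by default)
sudo cryptsetup luksOpen container.img mycontainer   # map it to /dev/mapper/mycontainer
sudo mkfs.ext4 /dev/mapper/mycontainer               # create a filesystem (first time only)
sudo mount /dev/mapper/mycontainer /mnt/container    # mount it
sudo umount /mnt/container                           # unmount when done
sudo cryptsetup luksClose mycontainer                # and close the mapping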

How to Disable Daemons Under Ubuntu 12.04 and Above

Ubuntu uses the upstart system to start several services. Every service controlled via upstart has a .conf file in the directory /etc/init, so you can list the available services using:

ls -l /etc/init/*.conf

To disable a service, upstart supports a "manual" configuration element that can be placed either in the <service>.conf file or in an overriding file <service>.override:

sudo sh -c "echo 'manual' > /etc/init/SERVICE.override"

To re-enable the service, just delete the .override file:
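
sudo rm /etc/init/SERVICE.override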

Not every service is controlled via upstart; on my system, for example, nessus is controlled via init.d. Services controlled via init.d are configured using an executable script in /etc/init.d plus links to this script in /etc/rc?.d, according to the respective runlevels. To disable such a service, the easiest option is to remove the executable flag from the script:

sudo chmod -x /etc/init.d/nessusd

To revert your decision, just add the executable bit again:

sudo chmod +x /etc/init.d/nessusd

Blog on GitHub Using Octopress

Today I stumbled upon a blog that uses Octopress. It looked very clean, so I read a bit about Octopress and its concepts.

It seemed similar to my current blog setup, in the sense that it produces static pages from the sources, which can then be transferred to the web server and served very fast and easily.

Octopress also offers easy integration with GitHub Pages, a hosting service by GitHub that is coupled to a Git repository on GitHub. Just push changes to the Git repository, and within seconds the changes are reflected on the respective site.

My repository is located here, and the site can be viewed here.

A cool concept Octopress makes use of is to keep the sources and the generated results in the same repository by using branches. Sources are committed to the branch "source", while the result is committed to "master". The two branches contain conceptually completely different artifacts, and they will never get merged.
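
In day-to-day use this roughly boils down to the following (a sketch from memory; the exact tasks come from the Octopress Rakefile):

git checkout source          # posts and layout live on this branch
rake new_post["My Post"]     # create a new post stub
rake generate                # build the static site from the sources
rake deploy                  # commit the generated site to master and push it to GitHub Pages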

The whole setup is really easy to create, and it is a joy to work with.

How My Other Site Was Built

When planning my other blogging site, I had some ideas about what I wanted to achieve: a typical blog site, but with extra flexibility for trying out other stuff and learning about different tools I am interested in.

I had a special setup in mind: hosted on my own web server, no dynamic / interpreted content on the server side, just static pages. Of course the pages need to be generated by a sufficiently flexible mini CMS.

So instead of carrying on with blogger.com, I wanted to control my own web server and build everything from the ground up. But I also had a very limiting requirement: no cost. Yes, I wanted to try out stuff without paying anything extra for it.

So the first thing to decide was the web server. Luckily, Amazon currently offers a year of free testing of their hosting services. I set up a server on AWS with the latest Debian and read up on the service details that are included in Amazon's free offer. Two things to keep in mind:

First, you have absolutely no guarantee of backups being made, nor of fail-safety etc. So it is vital to have your setup elsewhere in case your server is reset to factory defaults. I chose to prepare everything on my laptop and rsync the stuff over whenever something changes. More on that later.

Second, you have one fixed IP address (they call it Elastic IP) that can be bound to your instances. If you would like to have a meaningful web server name via a dynamic DNS hoster, you can register the Elastic IP there and give it a go. But a more flexible approach is to register your IP automatically whenever your interface comes up, using a script in /etc/network/if-up.d like so: $$code(lang=bash, style=monokai)

# query the instance's public IP from the EC2 metadata service
PubIP=$(curl -s http://169.254.169.254/latest/meta-data/public-ipv4)
# quote the URL so the shell does not treat & as "run in background"
curl --user "${USER}:${PASSWD}" "https://www.dnsdynamic.org/api/?hostname=${HOST}&myip=${PubIP}"

$$/code Note: This works if USER, PASSWD and HOST are defined beforehand.

The result of this setup was an always-on server, reachable on the internet under a meaningful name. The next step was to decide which HTTP server to use. I wanted a minimalistic setup, but after reading for a while here and there I learned that there are no good alternatives to Apache or Nginx. Since I am experienced with Apache, I went with Nginx to learn something new. Well, Nginx seems to be a bit easier to set up, but in the end it works just like Apache.

OK, the server is running, so what about the CMS? After researching for a while I went with Blogofile, which calls itself a "static website compiler". That was exactly what I was looking for, and I installed it on my laptop.

Blogofile gives you a basic setup for each site, including features like blogs etc. It is Python based (but it can be used without Python knowledge). It uses Mako Templates as a template engine, which makes it easy to change the layout and reorganize stuff. For blog posts it supports markdown, which I find handy to use.

As a result, whenever I want to add a blog post, I just add a markdown file in the posts directory and redeploy the site. For redeployment I am using the following script: $$code(lang=bash, style=monokai)

# rebuild the static site from the sources
blogofile build -s ${site}
# sync the generated result to the web server via ssh
rsync -avz -e ssh ${site}/_site/ ${sshhost}:/var/www/

$$/code Works like a charm.

If the AWS server ever dies, I would just have to reinstall and configure Nginx and the dyndns script, done. If my laptop ever dies, I would be screwed: reinstalling programs like Blogofile is no fun but doable, yet the valuable content would be lost. Therefore I commit my stuff to GitHub.

For me, this is a fun project. I have learned about EC2 on Amazon, setting up and securing Debian, writing shell scripts, Nginx, Blogofile, a little bit of Python, and Git on GitHub. And it gives enough room to play around.

Avoid Big Bug Backlogs

I strongly recommend avoiding long lists of bugs. Today we have cool tools like Bugzilla or Jira that make tracking bugs real fun, even in large numbers.

I can imagine a situation where a software product has no known bugs and is complete in a functional sense. However, I have never been in that situation since I started working on software. The typical situation I experience is:

  • you have got plenty of things that should be improved, extended, added – more than you can implement.

  • you have got plenty of bug tickets, more than you can handle.

If you are in a situation where you sell the software you are developing, you also have to weigh effort, priorities, the expected damage caused by bugs, and the expected gain from improvements. You cannot implement everything you want. So how about filling a list of features and a list of bugs, prioritizing them, estimating them roughly, and finally deciding what the next important item to implement is? It sounds promising. You only have to manage one list, sorted by a function of effort, estimate, priority etc., and sort new items into this list.

Feed the first four items of this ultimate list to a Kanban-Board every day and let the developers pick items in order.

Bullshit.

In my experience you cannot prioritize bugs and features into one list. You simply cannot compare them. Maybe a prioritized backlog is good for features, but it is not for bugs. Bugs are not something to plan or bet on. Bugs are the bill you get for your past work. You simply cannot avoid paying that bill.

I recommend the following to process bugs:

  • If a bug comes in: Decide if it is really a bug or possibly an enhancement (enhancement –> feature backlog).

  • Decide if you want to fix the bug. The decision should only take into account the level of quality you want to deliver, not what other things you have on your agenda.

  • If the bug is not important enough – and this is the important point – close it as “won’t fix”.

  • Otherwise choose between three severities: Fix it “now”, “today or tomorrow”, “within the next two weeks”.

  • Assign the bug to the person who should actually fix it.

  • Measure the time spent on bug fixing. If it goes beyond 15 percent, take a sharp look at what is happening with your software.

So why not simply give unimportant bugs a low priority and keep them in the list? Why not say: "Hey, if someone has nothing to do, they can just pick a bug from the 'non-important' pool."? For me this would be a statement like "We provide a quality level that depends on the time that is left over". It is also unclear whether this sort of bug should be part of an "open bugs" statistic. More: in the unlikely case that someone actually had time left over, which bug would she select? How would she choose from a list of 100k bugs? So just close these bugs and enjoy the pleasures of the real, lean bug list.