New challenges in software development for modern multi-core systems

I am currently applying at different universities for a master's in computer science. Everybody who has applied to a university knows that this is a horribly annoying thing to do. Every university has a different web form; most additionally want documents on paper; and some think clicking through a million different web sites in order to apply for the correct courses is still not enough effort to show that you really want to study at this specific university. The Technical University of Munich (TU München), for example, asks candidates to write a 1000-word essay about one of four topics. Here is something I learned writing this essay: there is nothing more enjoyable than typing 1000 words about multi-core systems into your phone while sitting on a beach in Cuba.

The essay was originally written in German and talks about “New challenges in software development for modern multi-core systems”.

 

In his 1965 paper “Cramming more components onto integrated circuits”, Gordon Moore described a regularity that is known today as “Moore’s Law”. Moore’s Law states that the number of components on an integrated circuit doubles roughly every twelve months. Moore himself revised his statement ten years later and predicted that the doubling period would be about 24 months. Until a few months ago, when major chip manufacturers such as Intel and AMD presented their future strategic plans, Moore’s Law was considered a self-fulfilling prophecy.
Until the early 2000s, computer users followed a race of clock speeds between individual processors. Manufacturers advertised ever-higher numbers, eventually reaching several gigahertz. Moore’s Law, however, could not be satisfied by simply scaling clock rates. Physical limits, such as the processors’ growing heat dissipation, made further scaling uneconomical.
From 2006 onward, multi-core processors began to dominate the PC market. These microchips with several main processor cores made it possible to parallelize certain applications and thus increase the performance of the computing unit, while the clock rates of the individual cores were kept low, reducing their susceptibility to physical effects.
With the introduction of first dual-, later quad- and octa-core processors, new challenges arose, and not only for the electronics. Software that had profited from a single processor with a high clock rate was not necessarily suited for multi-core operation. Some algorithms, such as quicksort, benefited from the ability to run several computations in parallel. Other algorithms, such as the Gauss-Seidel method, were conceptually unable to compute in parallel and thus suffered a decisive disadvantage. Parallelizability became a new criterion for the efficiency of an algorithm.
Besides influencing the efficiency of algorithms, computer science also faced further challenges. The dining philosophers problem describes a case study from theoretical computer science on the topic of deadlock, which Edsger W. Dijkstra formulated as early as 1971. Five philosophers sit at a table with five plates of pasta and five forks. The philosophers can either think or eat, but not both at the same time. Thinking requires no fork; eating requires two. When a philosopher wants to eat, he tries to pick up two forks and waits if two forks are not available. If all five philosophers decide to start eating at the same time, it can happen that each philosopher picks up exactly one fork and notices that he has to wait for the second one. At this point all philosophers are stuck in a never-ending waiting state, which is generally called a “deadlock”.
Deadlock is only one of the challenges that arise from the use of multi-core systems. The dining philosophers problem also illustrates a problem of shared resources.
When several processes or threads try to access the same resources, the access has to be synchronized. Without synchronization, problems such as a “dirty read” can occur: one process tries to read a value in memory while another process is changing it. Under certain circumstances the processor may switch the process context after the value in memory has only been partially changed. The second process then tries to read this partially changed value, which in most cases leads to undefined behavior. The manipulation of shared data should therefore not be interrupted by a context switch (an atomic operation). Alternatively, the reading process should wait for the manipulation to complete. This is usually accomplished with mutexes or semaphores. Note, however, that unoptimized process synchronization can cause significant performance losses.
With the z13 series of its mainframe systems, IBM today builds some of the largest multi-core systems in the world. Up to 168 processors with clock rates of up to 5 gigahertz can compute in parallel in these industrial machines. These machines, costing millions, are used by large companies such as Fiducia AG to provide I/O-heavy applications to major customers as efficiently as possible. Other companies use a different strategy for such applications. Large internet corporations like Facebook and Google cannot serve their customers with one or a few mainframes. Instead, the computing load is distributed not across several processor cores but across several machines. With clever load balancing, customer requests can be processed in a highly parallel fashion, transparently to the customer, on machines all over the world. On the one hand this significantly reduces the cost per computing unit; on the other hand it raises new problems in synchronizing the different computations. Traditional SQL-based relational database systems are mostly unsuited for this use case. Instead, distributed NoSQL systems such as MongoDB are increasingly used.
Switching from mainframes to distributed systems can save considerable costs. In many cases this solution offers further advantages, such as increased redundancy. Frequently used resources can be replicated on several machines to prevent deadlocks and other synchronization problems. Still, not all problems can be solved by redistributing the computing load. The caching of data, for example, is a synchronization problem common to all distributed systems that has not been completely solved to this day. As customer requirements keep changing, there will always be new tasks for computer science to solve.

C++ Object slicing in exceptions

Hey guys,

The problem

Take a quick look at the following code:

[code screenshot: a derived exception whose what() returns “test”, thrown and caught by value as the base std::exception]

What will it print?

I have to say that I was quite surprised when I saw the output:

[screenshot of the output: the base class’s default what() message instead of “test”]

Of course my expectation was that it would simply print “test”. However, I investigated a little and it turns out that this is an effect of object slicing.

The explanation

Let’s go a bit deeper into detail. The following code snippet describes the effect of Object Slicing.

[code screenshot: assigning an object of class B to a variable of its superclass A]

By assigning a B object to an object of its superclass A, you lose all additional information that the child class B provides. The object test is now an object of the class A and will therefore behave like an A object. Interestingly, ReSharper C++ has a warning for this:

[screenshot: the ReSharper C++ warning about object slicing]

 

The specific solution

The problem explained above is rather simple. The exception is caught by value and therefore an object-slicing assignment is implicitly performed. The direct solution for this problem is to simply catch the exception by reference instead of by value.

[code screenshot: the same try/catch, now catching const std::exception& instead]

[screenshot of the corrected output: “test”]

The general solution

It is difficult to find a general solution for object slicing. A tool like ReSharper is good at detecting simple cases in which this might be a problem. It did, however, not detect the above problem with exceptions. Generally, an assignment to a superclass type should always be done using references or pointers.

As always this code can be found on my GitHub.  Thanks for reading.

Using an extreme feedback device

I am a techie. I like cool technical gadgets. A garage door that opens when I tell Siri to do so? Neat! But sometimes, just sometimes, these cool gadgets can actually improve productivity.

An extreme feedback device (XFD) is a device that is supposed to provide unavoidable feedback on a certain action. In software development it is generally used as feedback for continuous integration (CI) systems.

Why do we need that?

When we push a commit to our repository, our CI starts building it and sends an email containing all warnings and errors of the build to every developer. An email like this is usually ignored, especially because there are so many of them. So sometimes a warning passes unnoticed for a day or two until it becomes hard to spot.

What is needed is immediate, unavoidable feedback. Or in other words: Extreme Feedback.

The blog of Softwareschneiderei contains an interesting article about extreme feedback devices and how to use them. It gave me and my colleague Klaus-Martin Reichert the idea to implement our own.

The idea

Well, our idea was simple: use a strip of RGB LEDs that glow in different colors depending on the status of our repository. A strip is supposed to be attached below the monitor of every developer.

For starters the LEDs would have four states:

  • Off:  The build is fine. No need to draw attention away from programming.
  • Blue: The CI is currently building the newest revision.
  • Yellow: There have been warnings in the last build.
  • Red: There have been errors in the last build.

Of course this can easily be expanded or altered.

The output device

To reduce the implementation effort we looked around for a USB-controlled RGB LED strip and found one. BlinkStick offers a broad selection of exactly the kind of LEDs we were looking for. Specifically, the BlinkStick Strip was interesting for our purpose, since it can easily be attached to the bottom of a monitor. Including shipping costs we paid about 25€ each. The price sounded reasonable.

[photo of the BlinkStick Strip]

The BlinkStick Strip offers 8 RGB LEDs which can be controlled individually using an open source API that is available for 13 languages including Python, .NET and C. Perfect.

Details on the implementation will be explained in the next blog post.

[animation: the LED strip changing colors]

Day to day life

Someone commits changes, and the LEDs start to glow blue. The build finishes and one of three things happens: they flash green and turn off, they turn yellow, or they turn red.

In my experience, if they turn yellow after I pushed a commit, it means that my colleague is about to come into my office with a big grin on his face, saying “What did you do again?”. It is impossible not to notice the result of the build process, since both our monitors literally glow yellow if there is a warning.

This may sound like it creates a lot of pressure not to make any mistakes, but that’s not what I’m trying to say. Quite the opposite, actually. It forces us to perform small code reviews on every unsuccessful build, which more than once has yielded improvements to the code.

Conclusion

An extreme feedback device is a great tool to visualize CI results. It noticeably increased the cohesion of our development team. It is cheap, easy to implement and overall a great addition for every development team.

Defining a custom iterator in C++

Iterators are great. They brought simplicity to the C++ STL containers and made range-based for loops possible. But what if none of the STL containers is the right fit for your problem? Creating a custom container is easy. But should you really give up range-based for loops? By defining a custom iterator for your custom container, you can have the best of both worlds!

 

First stop: Defining a custom container

For the sake of simplicity, let’s make a really simple container. It contains three compile-time fixed elements with no accessors or similar. Additionally it has a method that retrieves the container size.

class CustomContainer
{
public:
   CustomContainer() = default;
   ~CustomContainer() = default;

   size_t size() const { return nSize; }
   
private:
   int field1 = 1;
   int field2 = 2;
   int field3 = 3;
   size_t nSize = 3;
};

Easy as that. It’s our goal to iterate over the three given fields.

Declaring an input-iterator class

Again, for the sake of simplicity, we will implement a simple read-only iterator (an input iterator, in standard terms), which does not need to modify the underlying container. In order to find out more about the different categories of iterators I strongly recommend reading the C++ Reference website.

The (IMHO) easiest way to implement a custom iterator is to inherit from the std::iterator helper class and to specify its template parameters. So let’s take a look at them:

template<class Category, class T, class Distance = ptrdiff_t, class Pointer = T*, class Reference = T&>

The only really complicated parameter is the first one: Category. As explained on the C++ reference website linked above, there are different categories of iterators. We’re implementing a read-only input iterator. Therefore our category will be “std::input_iterator_tag”.
The second parameter defines the class of objects we’re iterating over. In an array this would be the type of the objects stored in it. In our case it is simply “int”. The other three parameters are usually insignificant. “Distance” describes the type in which the distance of two elements inside the container is measured and is almost exclusively ptrdiff_t (a signed integer type). So our resulting class declaration turns out to be:

class iterator : public std::iterator<std::input_iterator_tag, int>

Well that was painless.

Declaring all necessary methods

Let’s take a look inside the C++ reference website, shall we? The following table is copied directly from the given link:

[table from the C++ reference: the operations required for each iterator category]

The methods our iterator needs to define, therefore, are: dereferencing, prefix increment and postfix increment. So let’s do exactly that:

class iterator : public std::iterator<std::input_iterator_tag, int>
{
public:
    int operator*() const;
    iterator & operator++();
    iterator operator++(int);
};

By the way: in case you’re as surprised as I was when looking at the second definition of the “++” operator: this is the correct (and AFAIK only?) way of defining a postfix “++” operator. Hm. Who would have thought.

Perfect. We declared an absolutely valid iterator class for our custom container. Let’s implement the methods and we’re done.

Implementing the custom iterator

Looking back at our custom container, we somehow need to point to one of the three fields. The easiest way is to define an index, which is saved inside the iterator. Additionally the iterator, which is implemented as an inner class of the custom container, needs to access these fields. In Java an inner class has implicit access to its containing outer class. In C++ you need to pass it a reference explicitly.
So our current code looks something like this:

class iterator : public std::iterator<std::input_iterator_tag, int>
{
public:
    explicit iterator(CustomContainer & Container, size_t index = 0);
    int operator*() const;
    iterator & operator++();
    iterator operator++(int);
private:
    size_t nIndex = 0;
    CustomContainer & Container;
};

The prefix increment operator simply increments “nIndex” and returns a reference to the iterator; the postfix version saves a copy, increments, and returns the copy. The dereferencing operator however is somewhat more interesting.

int CustomContainer::iterator::operator*() const
{
   switch (nIndex)
   {
   case 0:
      return Container.field1;
   case 1:
      return Container.field2;
   case 2:
      return Container.field3;
   default:
      throw std::out_of_range("Out of Range Exception!");
   }
}

I think the above code is self-explanatory. Now we’re done, aren’t we? Let’s try it out!

CLion outputs the following error:

CppCustomIterators\main.cpp:8:16: error: no matching function for call to ‘begin(CustomContainer&)’

Of course. Like all containers, ours needs a begin() and an end() method!
Here is their simple implementation:

CustomContainer::iterator CustomContainer::begin()
{
   return CustomContainer::iterator(*this, 0);
}
CustomContainer::iterator CustomContainer::end()
{
   return CustomContainer::iterator(*this, size());
}

But still the linker is not happy:

CppCustomIterators/main.cpp:8: undefined reference to `CustomContainer::iterator::operator!=(CustomContainer::iterator const&) const’

An “!=”-operator is required for the loop to be able to check whether the current iterator has reached the end of the container. Well, that makes sense, got to give it that.

bool CustomContainer::iterator::operator!=(const iterator & rhs) const
{
   return nIndex != rhs.nIndex;
}

There you go.

#include <stdio.h>
#include "CustomContainer.h"

int main(int argc, const char* argv[])
{
   CustomContainer customContainer;

   for (auto it = customContainer.begin(); it != customContainer.end(); it++)
   {
      printf("Iterators: %d\n", *it);
   }

   for (auto i : customContainer)
   {
      printf("Ranged-for: %d\n", i);
   }

   getchar();
}

[screenshot of the program output]

Everything works as expected.

I hope this simple article helps someone, who’s had a hard time understanding custom iterators. As always the whole code can be found on my GitHub.

Thanks for reading!

Non-const copy constructors

In my last post I tried to show the general use of move-semantics. While writing it, I figured “Wait a minute, couldn’t I just use a non-const copy-constructor instead?”.

As can be seen in the following code snippet: yes, it works. Perfectly.

[code screenshot: an image class with a non-const copy constructor that transfers the buffer]
So why all the hassle with these weird new move-semantics then? Short answer: the principle of least astonishment (POLA).
Long answer: consider the following piece of code:

[code screenshot: copying an image and then multiplying the original with itself]

Given the above definition, this code will crash. A regular programmer will say “Huh? But I just copied one image and multiplied it with itself!”. And here we are. There is a reason we call it a “copy constructor”: we expect it to copy, not to move.

Code should do exactly what is expected and only what is expected.

The beauty of move-semantics

Hey guys,

as you may know, my team and I are currently working on an in-house image processing library written in C++. In this library our base class obviously is an image. For the sake of simplicity, let’s call it “Image”. The following code shows a simplified structure of our image class.
[code screenshot: the simplified image class with a buffer id, an Arith() method and a destructor that frees the buffer]
An image contains a buffer, which can be allocated and freed. The buffer is identified by an int and is handled by some abstract handler. Buffers are automatically freed during the destruction of the image object.
As you can see (in this example), the only way to manipulate an image is to call the “Arith()” method. It allows you to perform mathematical operations on image pairs. The following piece of code shows such a manipulation using addition and multiplication.
[code screenshot: allocating a result image and calling Arith() to add and multiply two images]

As you can see, the call to “Arith()” is rather ugly. You have to allocate a matching result image first, then make a cryptic call to a method that appears to be doing a lot more than one thing. Why all the hassle? The intuitive way to multiply two images would be to use the *-operator. Take a look at the following code:

[code screenshot: the same manipulation written with overloaded + and * operators]

This code is a lot cleaner and a lot more readable. Now operator overloading is not a new thing. So what’s the big deal? Take a look at the following implementation of the +-operator.

[code screenshot: an operator+ that allocates a local result image, calls Arith() and returns it by value]
Looks good, doesn’t it? Doesn’t it? Sadly it crashes with a “BufferNotAllocatedException”. Why is that? Well, take a look at the destructor of CImage.

[code screenshot: the destructor of CImage, which frees the buffer with the stored id]
When you try to return a local CImage object from a function and store it outside of the function, the implicit copy-constructor will be called. This will happily copy the stored buffer id and then destroy the temporary local object in the function. When the local object is destroyed, it requests that the image buffer with the given id be freed. Therefore the new copy outside of the function will point to an invalid buffer.

How do we get around that? Well, that’s not so easy. You could pass around a pointer to an Image instead of the image itself, but this takes away a lot of the beauty of operator overloading. Luckily C++11 introduced a little something called “move semantics”. The implicitly-defined move constructor performs a member-wise move. This amounts to copying built-in types and calling the move constructors of class-type members.

Now, as we can see in our first code snippet, the implicitly-defined move constructor may not be what we want. It would copy our “m_nBufferId”, resulting in said “BufferNotAllocatedException”. Instead we have to define our own move constructor (and while we’re at it, we might as well define a move-assignment operator).

[code screenshot: the user-defined move constructor and move-assignment operator, which take over the buffer id and invalidate it in the source object]

Where does that put us? Combined with the previously defined operators, the answer is rather simple. By using the move constructors, moved-from objects get their buffer id invalidated and will therefore not attempt to free their buffer and cause an exception. Additionally, no big image buffers are copied and freed at the end of a function. Instead only a few primitives, such as the buffer id, need to be copied.

TL;DR:
Move constructors can improve code readability and performance by a LOT.

You can find the working code example on my GitHub: https://github.com/LorToso/CppMoveSemantics

Crying doesn’t help

Let’s be honest. As a developer you don’t see too many different things. I sit in my office, in front of my (actually, a whole handful of) monitors, hammer around on my keyboard and get a big paycheck at the end of the month. Still, somehow the things I see on these monitors manage to make me feel all sorts of different things. I am satisfied when the tests run through successfully, I laugh when I read things on /r/ProgrammerHumor/, I am depressed when there seems to be no nice solution to a problem.

And sometimes code just makes me sad.

As I’ve mentioned in previous posts, I’m currently working on a large code base of about 400k lines of code that has mostly been written by students. In this code base you will find every code smell in existence and a few smells that have never been smelled before. A quote that describes the code base pretty well came from a colleague of mine:

We’re not being paid to maintain the system. We’re being paid to make it work.

There is no time, there is no money and for many people there is no reason to introduce industry standard procedures into our research environment.

Unit tests? Nah.. Run it on two or three records and if the results look fine, you can commit it.

After working here for a while you become a little numb to what you see every day. Yesterday, however, I saw a snippet of code that would end up ending my day early.

[code screenshot: the snippet in question]

For clarification: m_sScanDirection is a CString. It represents a direction in which images were recorded. In this short snippet we see it saved in three different ways:

  • An integer
  • A fake enumeration using defines (which leads to the integer)
  • A string

I scanned through our project. I prayed it wasn’t that bad. I prayed that this parameter wasn’t used too often. A single tear rushed down my face as I saw over a hundred usages of m_sScanDirection. A parameter that can only have three different states was passed as a string and parsed upon every usage.

I packed my things and left.

Today I came back to the same mess and struggled with whether I should just leave everything as it is and swallow my pride, or clean it up and finally turn it into an enum. Eventually I had to think of the Boy Scout Rule and ended up refactoring old code for two hours. The following image shows one example usage of the refactored code.

[code screenshot: an example usage of the refactored code]

The result is really simple and rather clean. Of course the “cleanest” approach would be to use polymorphism, but being pragmatic, I think that would be overkill.

In conclusion:
Using a string to represent a state is a very very bad idea. It is slow, cannot be checked by the compiler and is absolutely unreadable from a code-style perspective. Prefer enums or polymorphism.

TL;DR:
I saw bad code, I cried, I fixed. Don’t use strings to represent a state. Use enums!

Book review: A Brief History Of Time

Is it weird that my first book review is about a physics book instead of a book about computer science? Maybe. I think it is really important to look into different sciences in order to learn their approaches to solving problems. (And to be honest, I thought it would look really cool to lie on the beach and read Stephen Hawking.)

Stephen Hawking gives a short overview of the entire world of physics in chronological order. He explains everything from Isaac Newton’s discovery of gravity to his own work in the field of black holes. As someone who was always interested in physics and did my Abitur (German high school diploma) in physics, this book felt like a reiteration of my complete knowledge of physics, covering everything from classical physics as Newton described it to modern physics based on Einstein’s theory of general relativity.

Of course the book does not go into detail (after all, it is a brief history), but it explains how a physical theory is created based on observation and either proven or simply assumed to be true. These assumptions make up most of today’s theoretical physics.

I find it fascinating how people spend all their lives thinking about problems of theories that are, after all, based on assumptions. Not my world, though. The book is really interesting and easily readable for people without a physics diploma (like me). It lacks detail in some rather important parts (like the theory of general relativity or Pauli’s exclusion principle).

I’ll give it a 7/10, because after reading the book I put it down and it made me think about it for a good hour. Isn’t that what a good book is supposed to do?

Why you should not use Hungarian notation in modern programming

Okay, first of all a little backstory: I work on a ~15-year-old C++ project (the oldest code I’ve seen was from 1998). In our project we try to use Hungarian notation. As a member of the team I swallow my pride and use that notation as well.

We recently upgraded to Visual Studio 2015 and got a couple hundred new warnings of “local variables shadowing class members”. Looking at the code, most warnings were correct. But more importantly it got us to look at very old code.
Some local variables were named with the prefix “m_”, falsely indicating that they were member variables. Some floats had the prefix “n”, indicating an integer variable.

Let’s take a look at the most important parts of Hungarian notation and why I think it’s unnecessary.

  • m_ — member variable (e.g. m_size). This is by far the most common prefix in our source code.
  • n — number variable, usually int or long (e.g. m_nRowCount).
  • f — floating point variable, usually float or double (e.g. fDistance).
  • b — boolean variable (e.g. bIsChecked).
  • ctrl — GUI control element (e.g. ctrlEditBox). I have actually never seen this outside of our code base.

Let’s take a look at this variable:
m_unRwCnt
As the name suggests, this variable is a member variable of type unsigned integer and represents the number of rows in some container. Now the question I’m asking is: does this name say that much less?
RowCount
Well, it’s a count of rows… What sense does half a row make? It’s obviously some sort of integer value. Does a negative number of rows make sense? No? Then it must be unsigned as well! There are two more pieces of information left: its scope and its length (integer or long). The scope of a variable should be visible at a glance at the code. As Robert C. Martin suggests, methods should be as short as possible in order to satisfy the Single Responsibility Principle (SRP). Therefore it is never unclear whether a variable is local or comes from a greater scope. The length of the variable? What do I care? (Okay, I’ll admit, in some rare cases, e.g. pointer casting, this might be an important bit of information.)

Now let’s look at the options a modern IDE offers us. When hovering over a variable we get all information about its scope and its type. So where does that put us? We can always get type and scope information of a variable when we are unsure, and we don’t even have to click. More importantly, when we don’t need it, it doesn’t clutter the code.

TL;DR:
The hungarian notation clutters the code and does not add any information the IDE doesn’t already offer. Meaningful names clear out any doubts about the scope of a variable.

Why this site exists

Hey guys,
well, it was back in 2012, during a job interview, that I was first asked whether I had a website. My answer was “No, why would I need one?”. Until recently I still felt that way. And yet today I’m writing this.

What changed?
Well… I’m done with my bachelor’s, I have 1.5 years of work experience and it’s now been more than 5 years since I first hit a big green “COMPILE” button. I’ve come across a variety of problems, which I have solved mainly through the internet. I think it’s time to give something back.

I hope you guys find some interesting piece of knowledge on this website. Maybe a book review, maybe a solution to a problem I’ve come across, maybe just my opinion on a critical topic. Enjoy.