Distributed system design patterns

December 27, 2017

How to organize multi-container applications

Sidecar pattern

Let’s start with a single node. It’s a common practice to separate concerns at the container level as well. One container could act as a static content server, and the other could perform computations. They both exist on the same node and both have access to the same resources, although you may allocate different CPU and memory limits to each.

Another popular scenario is to have a separate container that handles the logs of the other container, as both share the same disk volume. This is called the sidecar pattern.

If you use Kubernetes, you have probably already created a pod with multiple containers:

    apiVersion: v1
    kind: Pod
    metadata:
      name: two-containers
    spec:

      restartPolicy: Never

      volumes:
      - name: shared-data
        emptyDir: {}

      containers:

      - name: nginx-container
        image: nginx
        volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html

      - name: debian-container
        image: debian
        volumeMounts:
        - name: shared-data
          mountPath: /pod-data
        command: ["/bin/sh"]
        args: ["-c", "echo Hello from the debian container > /pod-data/index.html"]

Adapter pattern

You have probably heard of the Adapter pattern in programming; a similar solution exists in the container world. Let’s say you have many applications running, each in its own container and each operating on a different data format, but you would like to export their data in one unified format. This can easily be done by a dedicated container that accepts multiple inputs and exposes them as a single output.

It’s very handy when you refactor your code and need to keep the old applications running, but also need them to deliver data in a new format.
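
As a rough sketch of the idea (the image names below are placeholders, not real images), the adapter runs in the same pod, reads the application’s native output from a shared volume and exposes it on a single, unified endpoint:

    apiVersion: v1
    kind: Pod
    metadata:
      name: adapter-example
    spec:
      volumes:
      - name: app-output
        emptyDir: {}
      containers:
      # The application keeps writing data in its own, legacy format.
      - name: legacy-app
        image: my-legacy-app            # placeholder image
        volumeMounts:
        - name: app-output
          mountPath: /var/output
      # The adapter reads that output and serves it in the unified format.
      - name: format-adapter
        image: my-format-adapter        # placeholder image
        volumeMounts:
        - name: app-output
          mountPath: /var/input
          readOnly: true
        ports:
        - containerPort: 9090           # unified endpoint consumed by the rest of the system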

Ambassador (proxy) pattern

The ambassador is like the adapter container, but working in the opposite direction. Sometimes you may need to simplify the communication of your containers with the outside world. That outside world could be, for example, multiple nodes of a database, but your containers can talk to them through a proxy that represents a single point of communication. Your containers then only have to worry about that single connection.
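
A minimal sketch, again with placeholder image names: since containers in a pod share the network namespace, the application can always connect to localhost, and the ambassador forwards that traffic to the actual database nodes:

    apiVersion: v1
    kind: Pod
    metadata:
      name: ambassador-example
    spec:
      containers:
      # The application only knows about localhost:6379.
      - name: app
        image: my-app                   # placeholder image
        env:
        - name: DATABASE_HOST
          value: "localhost"
        - name: DATABASE_PORT
          value: "6379"
      # The ambassador listens on that port and proxies the traffic
      # to the real database nodes, wherever they live.
      - name: db-ambassador
        image: my-db-proxy              # placeholder image
        ports:
        - containerPort: 6379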

Multi-node application patterns

With a group of nodes it’s the same problem as with a group of people: who’s going to lead? And the solutions are very similar to the human world.

Leader election pattern

No big political campaigns are needed, just a set of leader containers. Each of them is ready to take over the lead, and the result is presented as a single API that proxies access to the current leader. The election happens between the leader containers themselves. It is more complex than it sounds.
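
A rough sketch of the setup, assuming a hypothetical election sidecar image (the Kubernetes contrib project shipped a similar leader-elector; the image name and flag below are placeholders): every replica runs the application plus the elector, and exactly one of them holds the lead at any time:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: leader-candidates
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: leader-candidates
      template:
        metadata:
          labels:
            app: leader-candidates
        spec:
          containers:
          # The actual application; it asks the sidecar who the leader is.
          - name: app
            image: my-app                 # placeholder image
          # Election sidecar; all replicas run it, exactly one wins.
          - name: elector
            image: my-leader-elector      # placeholder image
            args: ["--election=my-app"]   # placeholder flag
            ports:
            - containerPort: 4040         # exposes the current leader's identity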

Scatter/gather pattern

It’s the fan-out/fan-in approach known from data aggregation patterns. A single request is processed by a root node that fans the request out to different nodes to perform partial computations in parallel, then collects the results (fans in) back into a single response. It’s a very popular pattern in search engines.
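
One way to sketch the fan-out side in Kubernetes (the image name is a placeholder) is a StatefulSet behind a headless Service, so every leaf shard gets a stable DNS name such as search-shard-0.search-shard that the root node can query in parallel:

    apiVersion: v1
    kind: Service
    metadata:
      name: search-shard
    spec:
      clusterIP: None          # headless service: one DNS entry per shard
      selector:
        app: search-shard
      ports:
      - port: 8080
    ---
    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: search-shard
    spec:
      serviceName: search-shard
      replicas: 4              # four leaves, each holding a slice of the index
      selector:
        matchLabels:
          app: search-shard
      template:
        metadata:
          labels:
            app: search-shard
        spec:
          containers:
          - name: shard
            image: my-search-shard   # placeholder: answers queries against its slice
            ports:
            - containerPort: 8080

The root node then sends the query to each shard, waits for the partial answers and merges them into the final response.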

Work queue pattern

Does RabbitMQ sound familiar? This concept distributes workload asynchronously to a set of distinct worker processes.

I recommend getting familiar with the concept of Jobs in Kubernetes, where you will find a complete example.
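
In the meantime, a minimal sketch of a Job: completions is the total number of work items and parallelism is how many workers run at once (the command below is a dummy placeholder; a real worker would pull an item from the queue, e.g. RabbitMQ, and process it):

    apiVersion: batch/v1
    kind: Job
    metadata:
      name: work-queue-example
    spec:
      completions: 8       # total number of work items to process
      parallelism: 2       # how many workers run at the same time
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: worker
            image: busybox
            command: ["sh", "-c", "echo processing one work item && sleep 5"]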

