Blog

  • Bulk modify PuTTY sessions

    The following script allows you to modify PuTTY sessions in bulk.

  • Theory vs. experience

    Question: “Do you understand?”
    Answer: “Yes!”
    Order: “Okay, then implement it.”
    Question: “How do I start?”

    And that is the difference between understanding something in theory and having experience doing it. There is a lot to it, and in this article I’d like to cover the topic using the specific example of creating a repository for your work. I have done this a thousand times and I have a rough idea about naming schemas, branching structure and where to put a repository for which case. What always annoys me is the arrogance and the view that this is “just a simple task”. Ten different people will name their repositories in ten different ways: the first uses dot notation, the second one hyphens, the third one camel case, the fourth numbers them and so on. This comes from the assumption that “I understand”. And that is just the naming.

    It doesn’t matter …

    “… it’s just a repository” is often the response when I want to discuss how to name it, what the general schema is and how to set it up correctly. The initial quote might even be correct … if … yeah, if you only have one repository, if you are working alone, if you don’t have to explain it to somebody, and so on for a ton of other ifs. But how many successful developers do you know who work alone? See, a recognizable schema, names with a purpose and an easy-to-search structure make life much easier. I want to give an example:

    “Go to room 80.”

    That may make sense if you have one office building with sequentially numbered rooms. But what if …

    • … the numbering starts at 1 on every floor and every floor has more than 80 rooms?
    • … there is a second office building with the same numbering schema?
    • … there are subsidiaries in other locations with the same schema?
    • … “Room 80” doesn’t refer to room 80 but to room 0 on the 8th floor?
    • … there are more than 10 floors and now you don’t know whether, for example, room 180 refers to room 80 on the first floor or room 0 on the 18th?

    Context matters. People who spend a lot of time in an environment take it for granted and see it as natural, logical and easy. But in reality, it is not. This is why it is important to talk, share information, discuss and define such things. And no, it is not just a repository. It is a system, a structure, and if it is self-explanatory you need less training, which means less time spent and less frustration. This all saves time and, as we all know: time is money.

  • I’m a hybrid morning person

    What is a hybrid morning person you ask? I love working and getting things done in the morning, but from the comfort of my bed with my laptop on my knees.

  • Cleanup Linux logs

    I just had the case that the disk of a Linux VM ran full and I had to make space quickly. I was willing to sacrifice my journald logs in order to get the urgent update through, and with the following commands I was able to free up a few gigabytes of space.

    journalctl --rotate
    journalctl --vacuum-time=1s

    This rotates the active journal files and then deletes all archived entries older than one second, i.e. effectively everything, freeing up the space immediately.
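
    To verify how much space the journal occupies before and after, a quick check like this can help (not part of the cleanup itself):

    journalctl --disk-usage
    df -h /var/log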

  • Start Spring Boot with classpath and class

    There is an easy way to start a Spring Boot application by just calling

    java -jar MyApplication-0.0.1.jar

    This requires all dependencies to be bundled into the jar file, which makes it a “fat jar” – that is also the keyword to search for when trying to create such a self-contained jar.

    But there is another way.

    For example, if I want to allow users to exchange the logging framework or to use different database drivers, the dependencies can be exported into a separate directory using the following task in the “build.gradle” build definition:

    task copyRuntimeLibs(type: Copy, group: 'build', description: 'Export dependencies.') {
    	into "build/deps"
    	from configurations.runtimeClasspath
    }

    Executing this task will copy all the runtime dependencies, like Spring Boot and its transitive libraries, into a folder called “build/deps”.
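
    Assuming the project uses the Gradle wrapper, building the jar and exporting the dependencies can then be done in one call:

    ./gradlew build copyRuntimeLibs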

    I can then start my application from the project directory with the following command (note that “;” is the classpath separator on Windows; on Linux it is “:”):

    java -cp "build/libs/*;build/deps/*" my.application.MainClass

    Hope this helps somebody.

  • Wait for host in BaSH

    Okay, this one is a little specific, but I recently had this issue and I wanted to share it because knowing this would have saved me one scripting language and a lot of time. I have previously implemented this in PowerShell, but it can easily be done in BaSH as well.

    The premise was to set up a new virtual machine, and before I could work with the machine I had to wait until it was created and powered up. The installation was done in a pipeline, so there was no interaction. The script waits until the machine is available and can be pinged, and then terminates. It is not a finished solution but a small part of the pipeline that allowed me to continue with the regular installation via SSH once the machine was available.
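
    A minimal BaSH sketch of such a wait loop could look like this (the host name and the timeout are placeholders, not the values from my pipeline):

    #!/bin/bash
    # Wait until the given host answers to ping, then continue (or give up after a timeout).
    HOST="my-new-vm.example.com"   # placeholder host name
    TIMEOUT=600                    # give up after 10 minutes
    SECONDS=0
    until ping -c 1 -W 2 "$HOST" > /dev/null 2>&1
    do
        if [ "$SECONDS" -ge "$TIMEOUT" ]; then
            echo "Host $HOST did not become reachable in time." >&2
            exit 1
        fi
        sleep 5
    done
    echo "Host $HOST is reachable."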

  • Delete Docker image from registry

    I finally found out how to delete images from my self-hosted remote Docker registry. Here is how:

    Prerequisites

    Make sure that the Docker registry supports deletion of images:

    user@host:~$ docker exec -it registry cat /etc/docker/registry/config.yml
    version: 0.1
    log:
      fields:
        service: registry
    storage:
      cache:
        blobdescriptor: inmemory
      filesystem:
        rootdirectory: /var/lib/registry
      delete:
        enabled: true
    http:
      addr: :5000
      headers:
        X-Content-Type-Options: [nosniff]
    health:
      storagedriver:
        enabled: true
        interval: 10s
        threshold: 3

    As you can see, this registry allows deletion of images: the “delete” option under “storage” is enabled.

    HTTP API calls

    For the next steps one needs the URL of the registry; in this example it is “my.docker.registry”. The repository is “goa-systems-example” and the tag to be deleted is “0.0.1”.

    The lines starting with GET, HEAD and DELETE are HTTP requests that can be executed in a tool like Postman or scripted with PowerShell or Python for batch processing.

    These commands worked for me, but I take no responsibility for accidental deletion. When in doubt, consult the official Docker documentation. Please handle with care!

    1. Get repositories

    GET https://my.docker.registry/v2/_catalog
    

    returns

    {
        "repositories": [
            "goa-systems-example"
        ]
    }

    2. Get images

    GET https://my.docker.registry/v2/goa-systems-example/tags/list
    

    returns

    {
        "name": "goa-systems-example",
        "tags": [
            "0.0.1"
        ]
    }

    3. Get manifests for each tag

    This requires a special “Accept” header.

    HEAD https://my.docker.registry/v2/goa-systems-example/manifests/0.0.1
    Accept: application/vnd.docker.distribution.manifest.v2+json
    

    returns the following HTTP response header

    Docker-Content-Digest: sha256:4b2ac7d3aaa230f4070ff97f4c8bf7fdb6f86a172b2a2621e1aa9806b5e6b01c
    

    4. Delete image with digest

    DELETE https://my.docker.registry/v2/goa-systems-example/manifests/sha256:4b2ac7d3aaa230f4070ff97f4c8bf7fdb6f86a172b2a2621e1aa9806b5e6b01c
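
    If you prefer the shell over Postman, steps 3 and 4 can also be combined in a small script using curl. This is only a sketch with the example values from this article (registry URL, repository and tag have to be adjusted, and authentication is not handled here):

    #!/bin/bash
    # Resolve a tag to its manifest digest and delete the manifest from a v2 registry.
    REGISTRY="https://my.docker.registry"
    REPO="goa-systems-example"
    TAG="0.0.1"

    # HEAD request: the digest is returned in the Docker-Content-Digest header.
    DIGEST=$(curl -sI -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
        "$REGISTRY/v2/$REPO/manifests/$TAG" \
        | grep -i '^docker-content-digest:' | awk '{print $2}' | tr -d '\r')

    # DELETE request: removes the manifest (and thus the tag) by digest.
    curl -s -X DELETE "$REGISTRY/v2/$REPO/manifests/$DIGEST"

    Note that deleting the manifest only unlinks the image; to actually reclaim disk space the registry’s garbage collector has to run afterwards, for example with “registry garbage-collect /etc/docker/registry/config.yml” inside the registry container.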

  • Use a build agent for multiple systems

    Assume you have a build agent that has a specific tool set installed – let’s say certain compilers, build tools and configurations. Parts of the team work with Provider A (let’s say GitHub), others with Provider B (let’s say GitLab) and a third part works with the internal Git repositories and a locally installed Jenkins instance. The whole setup runs in a company network that is secured by a firewall and basically looks like the following diagram shows.

    The basic principle here is that the agents connect to the service, i.e. from the internal network out to the cloud provider and not the other way round. This way no inbound route through the firewall is required, and as soon as an agent is up and running it connects to its cloud provider and appears as “online”.

    Advantages

    • All tools are available and maintenance is only required once.
    • Each service runs as a separate user, so access to external entities can be managed individually (e.g. SSH access via public key).
    • Separate workspaces and access for analysis can be granted per team.
    • The setup can be applied to Windows and Linux build agents.
    • Per-provider configuration is possible via the local home directory (local toolset).
    • Scalable: just clone the agent, change the host name and connect it to the providers.
    • Access to internal resources is possible, for example for publishing and deployment.

    Disadvantages

    • Single point of failure
    • Performance can be affected when all systems run builds at the same time.

    Challenges

    • Versioning of compilers when static compiler versions are required in the future (e.g. for firmware compilation on specific hardware) – this can be solved with Docker.

    Conclusion

    This article only scratches the surface of the topic, but it shows that it is easy to make better use of existing resources by connecting one set of build agents to different cloud providers, which allows teams to keep working in their familiar environments. Additionally, the ops team can manage the agents centrally and keep the tools updated.

  • Prevent explorer from automatically restarting

    For certain actions the Windows Explorer has to be stopped, for example to avoid open file handles or other interference while it is running. Windows is configured to restart the Explorer automatically when its process is terminated. I wrote a small PowerShell script that checks the current setting, disables it if necessary, kills the process, restarts it afterwards and optionally restores the setting if it was set in the first place. You can find it here:

    This script has to be executed with administrator privileges as it changes values in the system-wide part of the registry.

  • Automated Eclipse downloader

    I work with Eclipse a lot, and to download a new version, which is released four times a year, I use the following script. It can be executed with PowerShell and does not require Java to be installed on the system because it downloads OpenJDK itself. Eclipse can then be found in the folder “eclipse_out” and Java in “java_out”.

    Why this script? Because it also installs all the plugins I need, for example “Spring Tool Suite”, “SonarLint”, “Buildship” and many more.