Blog

  • Using quotes in remote commands

    In my previous tutorial I showed how to escape quotes in Bash and PowerShell commands. So far so good. Now what if you want to execute a Bash command via SSH from a PowerShell instance?

    Let’s start with the plain Bash command we want to execute:

    echo $'Hello \' world!'

    Executed in Bash, this prints a string that contains a single quote. Please note the three single quotes in the command. Now, to execute ssh from PowerShell on Windows, we can use the following command:

    PS> ssh ago@server-0003 'YOUR_COMMAND_GOES_HERE'

    I left out the actual command in this line to show how it needs to be “treated” in order to be executed remotely. We know from the previous tutorial that each single quote needs to be replaced with two single quotes in order to be recognized as a string character and not as a delimiting character. For the Bash command shown above this means:

    echo $''Hello \'' world!''

    Now we can replace “YOUR_COMMAND_GOES_HERE” with the treated command, which results in:

    ssh ago@server-0003 'echo $''Hello \'' world!'''
    The outer single quotes are consumed by PowerShell, every doubled single quote inside them is replaced with a single ' by PowerShell, and the result is the Bash command that gets executed on the remote host.
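    One way to double-check the quoting without touching the remote host is to let PowerShell print the string it would pass to ssh; the output should be exactly the plain Bash command from the start:

    PS> Write-Output 'echo $''Hello \'' world!'''
    echo $'Hello \' world!'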

    I hope this helps somebody.

  • Escaping in shells

    First things first: A string with spaces is enclosed in either single quotes ‘ or double quotes “. The difference you ask? Single quotes are treated “as is”: No variable replacement, no parsing, just a string. If you want to embed a variable value, you have to concatenate the string. Period. Double quotes allow dynamic values via variables:

    NAME="Andreas"
    echo "The name is $NAME"
    # Output: The name is Andreas

    This would print my name in the echo line.
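    For comparison, embedding the same variable next to a single-quoted string requires the concatenation mentioned above:

    NAME="Andreas"
    echo 'The name is '$NAME
    # Output: The name is Andreas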

    Okay, so now what if you need a quote in an echo line? Exactly: the same quote that is used to open and close the string? Well, you have to escape it. Let me make this clear: we are still working in a default Linux Bash here. Double quotes are relatively easy:

    echo "A string with a \" is fine."
    # Output: A string with a " is fine.

    Same goes for variables:

    NAME="Name with a \" character."
    echo "Value: $NAME"
    # Output: Value: Name with a " character.

    Single quotes are a little more tricky:

    echo $'String with a \' in it.'
    # Output: String with a ' in it.

    And as a variable:

    NAME=$'Name with a \' in it.'
    echo 'Value: '$NAME
    # Output: Value: Name with a ' in it.

    Okay, so now you can have quotes in strings that are framed by the same quotes. This is the foundation, and I’d start practicing it in Bash. I will continue this tutorial with escaping in PowerShell:

    Write-Host "A string containing a `" character."
    # Output: A string containing a " character.

    As you can see, the escape character is different from Bash: PowerShell uses a ` to escape a newline (`n) or quotes (`").

    As we all know, MS is known for their consistency. Single quotes are treated differently:

    Write-Host 'A string containing a '' character.'
    # Output: A string containing a ' character.

    Isn’t that awesome?

    So why am I explaining this?

    Try to imagine the following scenario: you are working on a Windows machine (with PowerShell) and would like to run a command via SSH (basically a Linux tool) on a remote Linux server that runs Bash. I’d suggest working from bottom (the command on the Linux server) to top (the command being written in the PowerShell environment). First execute it on Bash, then via SSH and finally embed it in the PowerShell command. This way you always know on which level you are and which escapes you are going to need.
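    A small sketch of this bottom-to-top approach (user@host is just a placeholder):

    # Level 1: the plain Bash command, executed directly on the server
    echo $'It\'s done.'

    # Levels 2 and 3: the same command sent via ssh from PowerShell, with every
    # single quote doubled and the whole thing wrapped in single quotes
    PS> ssh user@host 'echo $''It\''s done.'''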

    I will continue these tutorials because this topic always gave me a headache. Especially when variables are parsed on the executing host, on the remote machine or come from a command line. One has to stay very focused here and always visualize where each part of a command is executed.

  • Check if remote database exists

    Escaping in PowerShell or in any other scripting language is not that much fun to begin with. So if you have quotes in quotes on different shells on different systems it becomes really funny. So trust me, I am more than surprised that I just found a neat way to check if a database exists on a remote host with PowerShell (on Windows) and SSH (to a Linux machine):

    [string]::IsNullOrEmpty((ssh root@your.hostname.com 'mysql -N -e $''SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = \''databasename\'';'''))

    Execute this in PowerShell with the hostname and database name replaced and it tells you “False” if your database exists or “True” if not, because the check is whether the returned string is null or empty. Maybe just flip the boolean value in the end 🙂🙃
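    With the boolean flipped, a small sketch (same placeholder hostname and database name as above):

    $DatabaseExists = -not [string]::IsNullOrEmpty((ssh root@your.hostname.com 'mysql -N -e $''SELECT SCHEMA_NAME FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = \''databasename\'';'''))
    if ($DatabaseExists) { Write-Host "Database exists." } else { Write-Host "Database does not exist." }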

  • Unify line endings

    Let’s assume you have a software project and the line endings of the files are inconsistent. This means that git commits and pushes without proper configuration will result in a merge disaster, because the merge tools recognize the whole file as changed and not just the lines that actually changed. When working on Windows or on a system that supports PowerShell you can use the following piece of code to change the line endings of all files in your project folder:

    # Skip the .git folder so the repository internals do not get rewritten
    Get-ChildItem -File -Recurse | Where-Object { $_.FullName -notmatch '\\\.git\\' } | ForEach-Object {
        # Remember the file: inside "catch" $_ is the error record, not the file
        $File = $_
        try {
            $Content = Get-Content -Raw -Path $File.FullName
            $Content = $Content.Replace("`r`n", "`n")
            Set-Content -Path $File.FullName -Value $Content -NoNewline
        }
        catch {
            Write-Host -Object "Error handling file $($File.FullName)"
        }
    }

    This will load the content of each file, replace “\r\n” (Windows line endings) with “\n” (Linux line endings) and write the content back into the same file it was read from.
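    To verify the result, a quick sketch that lists all files still containing Windows line endings:

    Get-ChildItem -File -Recurse | Where-Object { (Get-Content -Raw -Path $_.FullName) -match "`r`n" }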

  • How to keep a clean local development environment

    As a developer I have to work with a lot of tools, development environments and frameworks and I want to

    • keep them in the same spot,
    • know what I work with, especially with which version,
    • keep old versions for compatibility reasons and
    • have a simple way of updating stuff without the need to change a lot of shortcuts.

    While using Visual Studio Code I found out that the tool is installed in

    %LocalAppData%\Programs\Microsoft VS Code

    when installing in user mode (without UAC permissions / as non admin). I adopted this idea for the tools I use on a daily basis:

    Now I am installing everything in the folder

    %LocalAppData%\Programs

    and add an environment variable. For example:

    Environment variables
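    A made-up sketch of what these variables could look like (the names and versions are just examples):

    ECLIPSE_HOME=%LocalAppData%\Programs\eclipse-2021-06
    GRADLE_HOME=%LocalAppData%\Programs\gradle-7.1
    JAVA_HOME=%LocalAppData%\Programs\jdk-16.0.1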

    These are then added to the %PATH% variable to make the executables available on the command line:
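    In the sketch above, the entries appended to %PATH% would be the following; the important part is referencing the variables instead of hard-coded paths:

    %GRADLE_HOME%\bin
    %JAVA_HOME%\bin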

    This way, if I have to install a new version of an application I just copy it into a new folder, change the environment variable, restart the shell and I am using the new version. Something is wrong with the new version? Just change the variable back and the old version is used. Then I can continue until a patch is available.

    The same principle can be applied to server environments, for example if an “Azure DevOps” build agent is used that is executed as a specific user for this purpose. Then the tools can be installed solely for this user and the setup does not interfere with potential other services running on this machine. The same goes for shared development instances. Sure, tools used by everybody can be installed globally, but each developer has their own preferences and this setup supports that.

    I am using this principle now for

    • IDEs like Eclipse, IntelliJ and Netbeans
    • Development kits like Java, Msys2, Python and dotnet
    • Build environments like Gradle, Maven and Ant
    • Tools like Keystore Explorer, KeePass and others

    Adding the stuff to the path variable is of course optional. For example, I do not want gcc available on every command line, because when compiling ESP32 applications the regular gcc compiler would be the wrong one and both compilers are called “gcc.exe”. Fortunately, tools like VS Code support environment variables in their configuration files:

    This would be %ProjectDir%\.vscode\c_cpp_properties.json for example:
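    A minimal sketch of such a configuration, assuming a variable like MSYS2_HOME points at the MSYS2 installation (the ${env:...} syntax is how the VS Code C/C++ extension reads environment variables):

    {
        "configurations": [
            {
                "name": "Win32",
                "compilerPath": "${env:MSYS2_HOME}/mingw64/bin/gcc.exe",
                "intelliSenseMode": "windows-gcc-x64"
            }
        ],
        "version": 4
    }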

    This way I can configure the used compiler locally for each and every project.

  • Archive, compress and send

    You want to archive (tar), compress (gzip) and send (ssh) a directory in one command? With Linux, it is possible:

    tar cf - yourdir | gzip -9 -c | ssh -l root yourhost.example.com 'cat > /path/to/output.tar.gz'

    What does it do?

    tar cf - yourdir creates an uncompressed archive and writes it to standard out (stdout).

    gzip -9 -c reads from standard in (stdin), compresses and writes to standard out.

    ssh -l root yourhost.example.com 'cat > /path/to/output.tar.gz' takes the content from stdin and writes it to the defined file on the remote host.
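    The same pipe pattern can also unpack the directory directly on the remote side instead of storing the archive there (a variation sketch with the same placeholder names; the destination directory has to exist):

    tar cf - yourdir | gzip -9 -c | ssh -l root yourhost.example.com 'tar xzf - -C /path/to/destination'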

    And the crazy thing: It actually works. Weird shit.

  • Send passwords more securely

    Sometimes it is inevitable to send passwords and credentials to a counterpart who needs access to some kind of management tool, because there is no extended and secure user management available.

    I always just sent the credentials together with the login link. Now imagine what this means: a potential scanner or crawler could find the credentials combined with the login link in one email. And if the email gets accidentally forwarded, the full information is also attached, including who sent it: me.

    Now I started to use some sort of two-factor communication:

    1. Put the credentials in a Zip file and protect it with a password (see the sketch after this list).
    2. Send the protected Zip file via Mail to the counterpart.
    3. Send the password over a different channel, e.g. WhatsApp, another mail, a phone call or a small piece of paper.
    4. Do not mention the original Mail with the Zip file in the password message.
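    For step 1, creating the protected Zip file on a Linux shell could look like this (a sketch: zip prompts for the password interactively, and credentials.txt is a made-up file name):

    zip -e credentials.zip credentials.txt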

    What does this mean?

    1. Somebody unauthorized would not know what the password is for.
    2. The password for the Zip file (sent via WhatsApp) is in a different system than the Zip file itself (sent via Mail), so compromising one system alone does not give an attacker anything.
    3. The password is not stored in plain text and does not get multiplied by every mail forward.
    4. This method adds another layer of security because now two systems would need to be compromised.
    5. It pushes others to also use this system and acts as a reminder not to just forward mails with passwords but to use the second communication channel as well, because one would have to rewrite or at least edit the mail with the password in order to get everything into one insecure mail.

    I got onto this topic recently when I received credentials via mail and the password via WhatsApp. At first I took this for granted and only realized a few days later that this is pretty genius. It is not the most secure or absolutely perfect solution, but it does add a layer of security to communication, and that is what security is about for me.

  • Example for automating a time-consuming manual process

    My recent entries were quite technical and were meant to solve very specific problems or show how to overcome certain issues on the last mile that are very annoying in a per se well defined concept. Today I’d like to give an overview from a higher level to put my other posts into perspective.

    The premise: You are a software/service provider

    Imagine you have a program or software you want to offer to your customer but the customer, often technically inexperienced, should not have to deal with servers, providers, DNS entries, certificates, maintenance, the cloud and all the technical stuff that comes with this. He should just fill out a form and afterwards it “should just work”.

    The start: A manual process

    As soon as the order comes in, a manual process takes place on the provider side:

    • Checking the provided data (are the VAT number and contact details correct, is it a real customer, etc.)
    • Create the virtual machine. Each customer gets one because problems on one machine should not affect the others.
    • Install and configure the software.
    • Set up DNS entries so that the page can be found by others (i.e. use “myshopname.com” instead of “172.16.47.11”)
    • Acquire certificates for state of the art secure communication. Under certain circumstances this has to be repeated every three months.
    • Register the page with tags in search engines.
    • Integrate the server into a monitoring environment, so that you can call the customer to tell them a problem has been solved instead of the customer calling you to report a problem.

    As you can see, there are quite a lot of steps that need to be done, and it all starts with a potential customer filling out a form. Now, customers are volatile: if this takes longer than the “accepted amount of time”, the customer will not use your service but maybe the competitor’s. I don’t have to go into detail what this means. The technical process on the other hand takes a certain amount of time and it has to be done by a person. While this person is doing it, all new requests that come in during the process have to wait and get scheduled back, which increases the duration even more. This means the process is not very effective. Thus it should only be done this way during development of an automated process: first get something running, then make it better and more efficient. The next part will describe how such a process could look.

    The goal: Automation

    The final goal is a process where the only thing indicating that a new customer has subscribed to the service is a payment at the end of the month. But just sitting down and saying “I am going to do this!” is like standing in front of a mountain and trying to take it all in one step: impossible. What is possible is automating and testing small parts on the way and putting them together to get an automated process.

    First let’s analyze the mentioned manual steps:

    As you can see there are several layers and “partners” to get information from or send information to. Often these partners offer APIs (interfaces that can be easily accessed from a program). So now I’d estimate where I could get which information from, which leads to this annotated diagram:

    Annotated process

    Now I would have a basic understanding of what needs to be done and where I could get my information from. These steps then translate directly into the stages of two pipelines (automated processes that can be launched via a trigger). The first trigger (for the validation pipeline) would be the customer filling out the form and sending it, and the second trigger (for the setup pipeline) would be the customer clicking on the confirmation link in the email he gets from the system.

    Pipeline view of the processes

    And in the end the customer would get a notification mail with generated access credentials, and when he opens the page for the first time he would have to change the password to his own.

    This is of course a massive oversimplification. In this process there are many more components involved: you need an application hosting the pipelines and listening to triggers, a server is required to provide software components, a monitoring server has to be set up and so on and so forth. Bringing everything into these diagrams would have been way too much for this example. What I wanted to show is that even very complex processes can be broken down into simple, manageable steps. Or, as said in terms of programming: divide and conquer.

    Thanks for reading and as always: I hope this helps somebody out there.

    PS: If you want to modify the diagrams for yourself you can open the PNG files in diagrams.net: just save them and open them on the page like this:

    Just select the PNG and it should open as an editable diagram.

  • Zabbix agent deployment and configuration

    The Zabbix agent is the counterpart of the Zabbix server: it collects data and sends it to the server, where it is displayed. Installing and configuring the agent is quite easy and can be done with a simple Bash script.

    #!/bin/bash
    # File: client.sh
    
    SERVER=$1
    HOSTNAME=$2
    
    # Zabbix version
    ZV=5.4
    
    # File name version
    FNV=${ZV}-1
    
    if [ -z "$HOSTNAME" ]
    then
            HOSTNAME=$(cat /etc/hostname)
    fi
    
    if [ -z "$SERVER" ]
    then
            SERVER=zabbix.example.com
    fi
    
    wget https://repo.zabbix.com/zabbix/${ZV}/ubuntu/pool/main/z/zabbix-release/zabbix-release_${FNV}+ubuntu20.04_all.deb
    dpkg -i zabbix-release_${FNV}+ubuntu20.04_all.deb
    apt update
    apt install -y zabbix-agent
    
    sed -i "s/^Server=127\.0\.0\.1$/Server=${SERVER}/g" /etc/zabbix/zabbix_agentd.conf
    sed -i "s/^Hostname=Zabbix server$/Hostname=${HOSTNAME}/g" /etc/zabbix/zabbix_agentd.conf
    
    systemctl restart zabbix-agent.service

    Now let’s imagine you have 1000 hosts and would have to do that on each and every one of them. That’s going to be a long day of work.

    My assumption here is that you, as the administrator, have SSH root access to all these machines via an SSH public key and that each host has an individual hostname (for example: vm1, vm2, vm3, …, vmN or GUIDs).

    If this is the case, the script shown above can be transferred to each host via “scp” and executed via “ssh”. I will show this with 4 hosts, but with a little bash magic this can be extended to a lot more.

    #!/bin/bash
    # File: server.sh
    
    SERVER=zabbix.example.com
    
    for HOST in vm1 vm2 vm3 vm4
    do
            scp client.sh root@$HOST:/root/client.sh
            ssh root@$HOST "chmod u+x ./client.sh; ./client.sh $SERVER $HOST; rm ./client.sh;"
    done

    In this script the server securely copies the file “client.sh” shown above to the hosts one by one and executes it to install and configure the Zabbix agent. The agent is restarted afterwards and can be added to the Zabbix server. I am planning to write another tutorial on how to do this via the API. This can then be integrated into the server script.
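    As for the “little bash magic” mentioned above: generating the host list instead of typing it out could look like this (a sketch, assuming the vm1 … vmN naming scheme):

    for HOST in $(seq -f "vm%g" 1 1000)
    do
            echo "Deploying to ${HOST}" # same scp/ssh body as in server.sh
    done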

    The client script can also be used to add the Zabbix agent manually. Just check the current version and modify the script if necessary.

    As always: I hope this saves somebody some time and makes administration a little bit easier.

  • Parse JSON output from an API with Python

    As great as PowerShell is with JSON objects, you can’t always use it because it is not available on each and every Linux machine out there. And I don’t know a way of parsing JSON properly with plain Bash. Fortunately a lot of servers come with “python” installed, and it can be used to parse JSON and retrieve values from the object with a comprehensible notation.

    For example, this is the response from an API call that creates a new DNS record in a self managed zone:

    {
        "record": {
            "id": "4711",
            "type": "A",
            "name": "testentry",
            "value": "0.0.0.0",
            "ttl": 3600,
            "zone_id": "2",
            "created": "1970-01-01 00:00:00.00 +0000 UTC",
            "modified": "1970-01-01 00:00:00.00 +0000 UTC"
        }
    }

    What I need from this call is the record id. To show how this works, I have to explain and extend the example: such a response usually arrives when calling an API, via “curl” for example. It is not just lying around on the disk. With this in mind, let’s assume that the shown JSON code is the result of the following call:

    curl -s https://exampleapi.goa.systems/path/to/endpoint

    Now curl just puts out the response on stdout and we can pipe it into the following command, which reads it via stdin:

    curl -s https://exampleapi.goa.systems/path/to/endpoint | python3 -c "import sys,json; print(json.load(sys.stdin)['record']['id'])"

    With this code the value of “id” (['record']['id'] – compare with the source JSON; in the example “4711”) is printed on the command line. Now, to store it in a variable, one could do this:

    RECORDID=$(curl -s https://exampleapi.goa.systems/path/to/endpoint | python3 -c "import sys,json; print(json.load(sys.stdin)['record']['id'])")

    Then you can use the value for your purposes. In my case, for example, I’d use it for deleting the record later in a process I’ll describe in another tutorial.
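    The same approach works for any other field of the response. A variation sketch that pulls several values in one call (against the same example response):

    curl -s https://exampleapi.goa.systems/path/to/endpoint | python3 -c "import sys,json; r=json.load(sys.stdin)['record']; print(r['name'], r['type'], r['value'])"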