Sunday, January 8, 2017

Docker containers with DNS and overlapping ports


~ Problem ~

If we only have a few Docker containers on our local dev machine, then it is not too cumbersome to (re)use the domain name "localhost" and ports 80/5432/3306/...

But when we start running several different projects' containers at the same time we quickly find ourselves juggling non-standard ports or stopping one project to start another.

This is because Docker, by default, will bind port-mappings to IP 0.0.0.0, which means listening on the given port on all network interfaces.


This prevents the computer from re-using the same port for another application. I.e., if we are running a container mapped to 0.0.0.0:1234, then we cannot start another container on port 1234 (on any interface).


~ Solution ~

We want to run many containers (web servers, databases, etc.) and allow them to be mapped to the same ports. We also want to (optionally) assign domain names to our containers. And we want a solution that works with any local application, any ports and any protocols, without configuring a SOCKS proxy, port-forwarding or SSH tunneling.

Give each container (or project) its own IP

The easiest way to allow for overlapping ports is to give each container port-mapping its own IP-address. But how many IP-addresses do you have pointing to your machine? Say hello to 127/8.


In fact, all addresses from 127.0.0.1 to 127.255.255.254 point to your local machine.
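
A quick sanity check (no extra configuration is needed on Linux or Windows; on macOS each extra loopback address must first be added as an alias):

ping -c 1 127.10.1.1      (on Windows: ping -n 1 127.10.1.1)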

So, let's bind our port-mappings to a suitable local IP address. Docker allows us to do this by prefixing the port-mapping with the IP address it should bind to on the host. The port syntax is the same in docker run and docker-compose.

In this example the address space 127.10.x.x is used for containers. Note that the same IP can be re-used as long as the ports do not overlap, which is useful for a database + web-based database-administration combo, as sketched below.
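
A minimal sketch with docker run (image names are just examples; a docker-compose ports entry takes the identical "ip:host-port:container-port" string):

docker run -d -p 127.10.1.1:80:80 nginx
docker run -d -p 127.10.2.1:80:80 httpd

docker run -d -p 127.10.3.1:5432:5432 postgres
docker run -d -p 127.10.3.1:80:80 adminer

The first two containers both use port 80 without conflict; the last two share 127.10.3.1 since their ports differ.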




Make a neat tree of domain names 

Now that our containers have their own IPs, we can start using DNS to access them in a human-friendly way. The easiest way is to make a sub-tree in a public DNS zone we control.
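
For illustration, assuming we control a (hypothetical) zone example.com, the records in its zone file could look like:

db.project1.dev.example.com.    IN A    127.10.1.1
www.project1.dev.example.com.   IN A    127.10.1.2
www.project2.dev.example.com.   IN A    127.10.2.1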


As an alternative to buying/managing our own public domain (only ~20€/year, e.g. at https://eurodns.com), we can instead use free wildcard DNS services like nip.io or xip.io. They answer any query with the IP address embedded in the domain name itself.
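
For example, a query for any name ending in 127.10.1.1.nip.io comes back with that address:

dig +short www.127.10.1.1.nip.io
127.10.1.1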


Or, if we feel like taking the blue pill, we can run our own DNS in a container.
http://www.damagehead.com/blog/2015/04/28/deploying-a-dns-server-using-docker/
With this approach we can conjure up a whole TLD for our containers, e.g. www.projectname.jode
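
As a sketch of the idea, using dnsmasq rather than the image from the linked post, one config line per project (or a catch-all for the whole TLD) is enough:

address=/project1.jode/127.10.1.1
address=/project2.jode/127.10.2.1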

~ Summary ~

This solution (tested on Windows and Linux) takes away port-mapping dilemmas and introduces the ability to use DNS. 

A few things to note:
  1. If any other application listens on 0.0.0.0:80, then it will not be possible to listen on e.g. 127.10.2.1:80. A common gotcha is that Skype (Preferences > Advanced > Connection) does this by default; it can be disabled there.
  2. Using 127.x.x.x addresses only works locally. Even though the domain-names look very shareable, they are not.
  3. To expand this pattern to work between multiple machines, the host needs a (virtual) interface for each external IP, and the network routing for the range needs to be set up accordingly (see the sketch below).
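
A rough sketch of that last point on a Linux host (addresses are illustrative, and the other machines/routers must also be told to route this range to the host):

sudo ip addr add 10.20.1.1/24 dev eth0
docker run -d -p 10.20.1.1:80:80 nginx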


Sunday, October 19, 2014

Learn HTTPS by becoming your own CA

Chain of trust

HTTPS depends on a chain of trust, where each step in the chain consists of a certificate that has been signed by its parent certificate. Usually the chain is 2-3 nodes long. The first node in the chain is one of the 50+ Certificate Authority (CA) certificates configured to be trusted by your computer (and basically all computers world-wide). The last node in the chain is the certificate identifying the domain name of the HTTPS website you are visiting.

Looking at https://www.google.se for example, the chain of trust starts from GeoTrust Global CA, which is already trusted by most computers by default. GeoTrust signed Google Internet Authority G2, specifically allowing Google to sign other certificates on behalf of GeoTrust. Google then signed a certificate with a Common Name (CN) of *.google.se, which matches the domain name(s) this certificate represents.



Certificate

A certificate is basically a public key, bundled with some key-values (like an expiration date and, most importantly, a Common Name, CN), and appended with a signature: a hash of those contents, encrypted with the private key of whoever signed the certificate.



In many cases the Common Name in a certificate is the name of a company (like GeoTrust), but for the website certificate (the last node in the chain) the CN must match the domain name (as seen above for *.google.se, which is a wildcard match for www.google.se among others). Wildcard certificates are convenient since they can be reused for any domain name that matches.

Each certificate has a corresponding private and public key.



Certificate sign request (CSR)

Signing a certificate requires two peers (for example: you and GeoTrust) to co-operate. Each of the two peers has their own private key they need to keep secret. You want to send your public key and some key-values (such as the domain name in the Common Name (CN)). To do this you create a certificate signing request (CSR), which is basically a certificate with a missing signature.



When GeoTrust gets your CSR they can do some research to see that you actually own that domain name (usually by sending a verification e-mail to hostmaster@your-domain.com). They can also look up your company in a public directory and/or give you a phone call. When satisfied, they add some key-values such as valid from, valid to and the purpose of the certificate. (For example, the Google Internet Authority certificate has an extra purpose which gives it the right to sign other certificates, but *.google.se does not.) After adding their signature (by hashing the certificate and encrypting that hash with their private key) they send you back a complete certificate.

Let's play Pretend Certificate Authority

First install OpenSSL and Nginx. Put openssl on your PATH and open a terminal in the Nginx conf folder.

1.1) Generate your CA private RSA key (root.key)

openssl genrsa -out root.key 2048

Here the key will be of size 2048 bits (default: 512), and the private key will not be password protected. (Password protection only benefits security if your password can resist trillions of brute-force attempts per second, assuming you don't store it in plain text in your server config, which you probably would do anyway. :)
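
If you do want a passphrase on the key, genrsa can encrypt it at creation time, for example with AES-256:

openssl genrsa -aes256 -out root.key 2048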

1.2) Generate your CA certificate sign request (root_req.csr)


openssl req -new -key root.key -out root_req.csr -sha256

You now need to fill in some info, but the most visible is the Common Name (CN).
I wrote Johan Deckmar as my CA Common Name. After completing all the steps, this is what the browser will show:

(Final result)

1.3) Self-sign the root_req.csr with the root private key to get the CA root certificate (root.crt)

openssl x509 -req -days 365 -in root_req.csr -signkey root.key -out root.crt -sha256

Alright. Almost half way there. Currently the root.key needs to be kept private, and root.crt is a public certificate that you want people to trust as a root CA certificate. If you are doing this at your company, it's root.crt that needs to be installed as a root CA cert on the employees' computers.

Now, add the root.crt as a trusted CA root cert on your computer.

In Windows: Double-click on it --> Install certificate --> Next --> Select destination: Trusted root certificate authorities --> Next --> Complete.

For managing certs in Windows in general: Start --> Run: certmgr.msc

2.1) Generate your web-server private RSA key (server.key)


openssl genrsa -out server.key 2048

2.2) Generate your web-server certificate sign request (server_req.csr)


openssl req -new -key server.key -out server_req.csr -sha256

As before, fill in some info about your company and web-server. NOTE: Set the Common Name (CN) to a domain name (or wildcard) matching your web-server (where you have Nginx). If you are just trying this on your local computer, you can use the following domain name, which points to your local IP address: localhost.deckmar.net


2.3) Sign your web-server CSR with CA private key to make the cert (server.crt)


openssl x509 -req -in server_req.csr -CA root.crt -CAkey root.key -set_serial 02 -out server.crt -days 365 -sha256

2.4) Concatenate your web-server cert and the root CA cert to form a chain (server.pem)


When your web-server (Nginx) identifies itself with an HTTPS certificate, it should send the whole chain of certificates, starting from the web-server cert and finishing with the root CA cert. This might sound complicated, but it is actually done by concatenating the certificate files into one file, like this:

cat server.crt root.crt > server.pem

The file ending doesn't actually matter, but I use the file name server.pem for a file with multiple certificates. (PEM is a format which can contain one or more certificates concatenated after each other.)
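
As a sanity check before wiring up Nginx, you can verify that the web-server cert actually validates against your root CA (this should print "server.crt: OK"):

openssl verify -CAfile root.crt server.crt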

3) Configure and start Nginx


Now open conf/nginx.conf, scroll down, and un-comment the configuration under # HTTPS server.

I made the following three changes in nginx.conf:

ssl_certificate      ../secure/server.pem;
ssl_certificate_key  ../secure/server.key;

ssl_session_cache    none;

The paths to the .pem and .key files are relative to the conf folder, so as you can see I put my keys and certs in a new folder called secure in the nginx folder.

Start Nginx. Assuming everything was done correctly you should now get a green lock when you open:
https://localhost.deckmar.net

Since I used mainframe.deckmar.net as the domain name, this is what I see when inspecting the certificate on my server (click the green lock --> certificate information).


In some cases you need to restart your browser completely to use the latest certificate settings on your computer.
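
If it still doesn't work, openssl can show exactly which chain Nginx is serving (assuming the default HTTPS port 443):

openssl s_client -connect localhost.deckmar.net:443 -showcerts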

Let me know how it goes if you try this out yourself, especially if you find some mistake in the commands or something missing/wrong in the explanations.

Cheers!

Sunday, August 5, 2012

One-keyboard-button compile your Google Drive-hosted Latex project

Context

First off, you already have your report on Google Drive and have made a compile script, right? (See the earlier post on how to set that up.)

Binding F2 to run the script

Let's say that "C:\thesis\build.bat" is the script that will download and compile your report. If we can bind one keyboard button to run that script (regardless of currently focused application) it could speed things up.

Here's how to beat Windows into running a .bat file by a key-press:

  1. Right-click on "Cmd" in the start-menu and "Send to taskbar" (making a shortcut among the Windows 7 speed-launch-icons)
  2. Shift-right-click the new cmd-icon in the taskbar and press Properties
  3. In "target" box append: /C "C:\thesis\build.bat"
    It should now say %windir%\system32\cmd.exe /C "C:\thesis\build.bat" in the target-box. 
  4. In the keyboard-shortcut field, click with the mouse and press (for example) F2.
  5. [Ok]
Now pressing F2 at any time brings up a console window with your build-script flashing by like a black-and-white Matrix scene, and then disappears. This is great for typing in the Google Drive document and having your pdf open on the other monitor in SumatraPDF, which auto-refreshes the PDF when the file is updated.

Thursday, August 2, 2012

Auto-export URL references to Bibtex correctly in Mendeley

The problem

Auto-exporting a bibtex file from Mendeley is nice and awesome when writing in Latex, but @misc URL references get their url in a \url tag, not in \howpublished{\url{...}} - which results in the URL not showing up in the References section when compiling the paper.

Solution

In Tools - Options - Document Details - [Web page], check "Medium" (this is mapped to \howpublished in the exported Bibtex). (Tip: also check "Citation Key" to control the \cite key.)

Now, in your URL references, write, for example, "Available: http://www.example...." in the "Medium" field in the details pane of your reference. The URL now shows up correctly in your paper (and is clickable)!
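
For reference, the exported @misc entry should then end up looking something like this (field values are illustrative):

@misc{example2012,
  author = {Example Author},
  title = {Example page},
  howpublished = {Available: \url{http://www.example.com}},
  year = {2012}
}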


Thursday, March 8, 2012

Write LaTeX in Google Docs, compile locally to pdf

I want to write my MS thesis in LaTeX code, but also from anywhere via Google Docs.

This is how I automatically fetch and compile LaTeX code from Google Docs:

  1. Create the Google Docs document
  2. Put some LaTeX code in it. Hopefully you have some "boilerplate" code you always start with.
  3. (I assume you have a LaTeX compiler installed and know how to write in LaTeX)
  4. Install wget if you don't have it (Windows:  http://gnuwin32.sourceforge.net/packages/wget.htm, Other:  http://ftp.gnu.org/gnu/wget/)
  5. Press Share in your doc and allow anyone with the link to view it:
     
  6. Save. Copy the public link. Change the "/edit" at the end to "/export?format=txt".
  7. Make a folder on your computer where you will be doing the LaTeX compilation.
  8. Make a script which downloads the doc as raw text file and then compiles it with the latex compiler.

This is what my download-and-compile script looks like:

"path/do/wget" "https://docs.google.com/doc..." -O thesis.tex --no-check-certificate
"path/to/pdflatex.exe" -interaction=nonstopmode "thesis.tex"

Those two lines are basically all it takes. When I run build_thesis.bat it downloads the very latest code and compiles it into a pdf.

Notes
  • Problems with the UTF-8 BOM sneaking into the beginning of the file? Try lualatex.exe instead, which supports UTF-8.
  • If you want to split up into many .tex files you can make many docs and wget all of them.
  • If you use images it's easiest to simply have them in your working folder.
  • Images in the Google Doc are ignored in the raw text, so you can keep the images in the doc for a nice preview, alongside the LaTeX code that includes them at compile time.
  • A ToC in the doc shows up as text in the raw export, so it's best not to use one and rely on scrolling instead.
Tell me your results. Happy LaTeXing!

/Johan

Monday, August 8, 2011

SSH with RSA keys


Overview

To start using RSA keys (instead of)/(together with) passwords for SSH, you first need to do three things:
  1. Generate a key pair with PuTTYgen
  2. Paste your public key into $HOME/.ssh/authorized_keys on the server
  3. Load your private key in PuTTY and save the connection so you don't have to redo it 

Guide for Windows

Make sure you have both PuTTY and PuTTYgen (download here).
Open PuTTYgen and generate a key pair of at least 1024 bits.

Then put your name in the comment, e.g. "rsa-key-John", and (if you want) a passphrase that must be typed before the key can be used.
Why have a passphrase on the key?


If a hacker gets hold of your private key, he/she must first brute-force the RSA key's passphrase in order to log in to the server. That gives an administrator time to discover the leak and block the leaked key's (or keys') access to the server.

Save both the public and the private key on your computer. All security depends on your private key being kept secret. Copy the text highlighted in the image above; that is your public key, which you will add on the server.
Handling your private key


A few guidelines:
  • Don't put the key in Dropbox
  • Don't send the key by e-mail
  • Don't give your key to anyone else; generate a new one for them instead
  • Use a USB stick if you need to move a private key
  • Want a key at home too? Generate a new one and e-mail the public key so you can add it from the office

Final step: Copy your public key from the box "Public key for pasting into..", and paste it into $HOME/.ssh/authorized_keys on its own line.

Load your private key in PuTTY and save the connection under Session so you don't have to redo it.



It is considered perfectly safe to have a public SSH server that only accepts RSA-key logins, provided that you keep your private keys private. You should therefore definitely disable password login in /etc/ssh/sshd_config, making all brute-force attacks pointless.
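
The relevant sshd_config directives for this are sketched below (remember to reload sshd afterwards, e.g. service ssh reload):

PubkeyAuthentication yes
PasswordAuthentication no
ChallengeResponseAuthentication no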

Sunday, November 7, 2010

My opinion on the future structure of the web

With XHTML Strict we have separated content and design. Using CSS we have been able to remove much redundancy in the code delivered to browsers. We basically define snippets of design and apply them on an arbitrary amount of elements.

I believe the next step is to separate data and structure, by defining snippets of structure and applying them to an arbitrary amount of data.

This can be easily achieved now using jQuery Templates, which at the time of writing is in beta.
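
A minimal sketch of the idea (template markup, field names and script paths are illustrative, using the jquery.tmpl beta plugin):

<script src="jquery.js"></script>
<script src="jquery.tmpl.js"></script>

<ul id="comments"></ul>

<script id="comment-tmpl" type="text/x-jquery-tmpl">
  <li class="comment">
    <a class="author" href="/user/${author}">${author}</a>
    <span class="time">${time}</span>
    <p>${text}</p>
  </li>
</script>

<script>
  // Only this data travels over the wire for each comment;
  // the structure above is downloaded once.
  var comments = [
    { author: "SuperWizTech", time: "4 months ago", text: "completly awsome? :D" }
  ];
  $("#comment-tmpl").tmpl(comments).appendTo("#comments");
</script>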

Example:

Imagine the comment-section on YouTube or this blog. Each comment might have an image, a timestamp, username, thumbs-up/down and some wrapping. All this XHTML code is repeated for every comment.

A short comment

Using a short YouTube comment as an example, the XHTML code consists of 721 bytes of data (generated in a loop on the server side and transmitted to the browser). Removing all XHTML code from the example comment - leaving only the actual information needed - yields 48 bytes of data. In other words: the XHTML code contained 6.657% information, the rest being redundant structure code which is repeated for each comment.

Visual representation:

Code of the short comment:

<li data-author-viewing="False" data-id="5mm3YzP7OX3U298MLeHm0HolNSkstTdEdfbrhWTEAnI" data-score="0" data-author="SuperWizTech" data-pending="0" data-blocked="False" data-flagged="False" data-removed="False" class="comment current"> <div class="metadata"> <div> <a class="author" href="/user/SuperWizTech" title="SuperWizTech">SuperWizTech</a> </div> <div> <span class="time">4 months ago</span> </div> </div> <div class="content"> <div class="comment-text" dir="ltr"> <p>completly awsome? :D</p> </div> <div class="metadata-inline"> <a class="author" href="/user/SuperWizTech">SuperWizTech</a> <span class="time">4 months ago</span> </div> </div> </li>

A long comment

The same comparison on a (relatively) long comment yields an information-to-structure-code-ratio of 453 / 1518 = 29.84%

Visual representation

Code of a long comment

<li data-author-viewing="False" data-id="5mm3YzP7OX2-iXk_QNRtPTEnt5jOC_kWffObtTXjW24" data-score="4" data-author="jpsieben7" data-pending="0" data-blocked="False" data-flagged="False" data-removed="False" class="comment current"> <div class="metadata"> <div> <a class="author" href="/user/jpsieben7" title="jpsieben7">jpsieben7</a> </div> <div> <span class="time">2 years ago</span> <span class="comments-rating-positive">4 <img class="master-sprite comments-rating-thumbs-up" src="http://s.ytimg.com/yt/img/pixel-vfl3z5WfW.gif"></span> </div> </div> <div class="content"> <div class="comment-text" dir="ltr"> <p>what you could do is get a rechargeable battery pack and 2 nxts. then have the base hooked up to the wall so it automatically keeps charging and then have the base use the light sensore and send out a strobe which the bartender finds and? goes and gets. a little bit more complicated but would be a bit more acurate and quick. also make the base have a gravity fed drink loader in it so when 1 is taken another takes its place.</p> </div> <div class="metadata-inline"> <a class="author" href="/user/jpsieben7">jpsieben7</a> <span class="time">2 years ago</span> <span class="comments-rating-positive">4 <img class="master-sprite comments-rating-thumbs-up" src="http://s.ytimg.com/yt/img/pixel-vfl3z5WfW.gif"></span> </div> </div> </li>

Summary

Assuming (educated guesses):

  • Average comment text size: 200 bytes (justified for some Unicode encoding)
  • Additional unique data per comment: 50 bytes (such as: index; special class-names)
  • Data structure overhead per comment: 30 bytes (using JSON)

If we were able to represent the data (username, comment-text and timestamp) with said overhead and have its structure defined only once, the comment section on YouTube would use less than 25% (280 / 1265 = 22.134% in this example) of its current bandwidth for each [page view]/[comment page-flip].

Simply put, separating structure from data on the web would first of all remove many chunks of redundant code in repetitive sections (such as comment- and search-result-sections). Secondly, the data would, by design, be set free from its client-specific delivery format (in this case, client = web browser).

One benefit of such a separation would be the possibility of easily creating a completely different interface to the internet service on a mobile client, without the need to build a new API for it.