Tuesday, May 24, 2016

JBoss Fuse: dynamic Blueprint files with JEXL

In this post I’ll show how to add a little bit of inline scripting to your Apache Aries Blueprint XML files.

I wouldn’t necessarily call it a best practice, but I have always had the idea that this capability might be useful; I probably started wanting it back when I was forced to use XML to simulate imperative programming structures, like when using Apache Ant.

And I have found the idea validated in projects like Gradle or Vagrant, where a full programming language is actually hiding in disguise, pretending to be a Domain Specific Language or a surprisingly flexible configuration syntax.

I have talked in the past about something similar, when showing how to use MVEL in JBoss Fuse.
This time I will limit myself to showing how to use small snippets of code that can be inlined in your otherwise static XML files, a trick that might come in handy when you need to perform simple operations like string replacement or arithmetic but want to avoid writing a Java class for that.

Let me say that I’m not inventing anything new here: I’m just showing how to use functionality that is provided directly by the Apache Aries project, but that I haven’t seen used that often out there.

The goal is to allow you to write snippets like this:

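Something along these lines; the property key, the environment variable and the regex here are just illustrative placeholders, adapt them to your own setup:

...
    <service-properties>
        <entry key="osgi.jndi.service.name"
               value="${env:PWD.toUpperCase().replaceAll('-', '_')}"/>
    </service-properties>
...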

You can see that we are invoking the java.lang.String.replaceAll() method on the value of an environment variable.

We can do this thanks to the Apache Aries Blueprint JEXL evaluator, an extension to Apache Aries Blueprint that implements a custom token processor which “extends” the base functionality of Aries Blueprint.

In this specific case, it does so by delegating the token interpolation to Apache Commons JEXL.

JEXL, the Java Expression Language, is just a library that exposes scripting capabilities to the Java platform. It’s not unique in what it does, since you could achieve the same with the native support for JavaScript, or with Groovy for instance. But we are going to use it since the integration with Blueprint has already been written, so we can use it straight away on our Apache Karaf or JBoss Fuse instance.

The following instructions have been verified on JBoss Fuse 6.2.1:

# install JEXL bundle
install -s mvn:org.apache.commons/commons-jexl/2.1.1 
# install JEXL Blueprint integration:
install -s mvn:org.apache.aries.blueprint/org.apache.aries.blueprint.jexl.evaluator/1.0.0
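
If you want to double check that both bundles came up, something like this from the shell should show them as Active (the exact bundle names in the listing may differ):

# from the Karaf/Fuse shell
osgi:list | grep -i jexl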

That was all the preparation we needed; now we just have to use the correct XSD version, 1.2.0, in our Blueprint file:

xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.2.0"

Having done that, we can leverage the functionality in this way:
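
What follows is a sketch of a complete blueprint.xml built on that idea; the published bean, its interface and the exact JEXL expression are illustrative placeholders, the relevant parts are the evaluator="jexl" attribute on the property-placeholder and the ${...} token inside the service properties:

<?xml version="1.0" encoding="UTF-8"?>
<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0"
           xmlns:ext="http://aries.apache.org/blueprint/xmlns/blueprint-ext/v1.2.0">

    <!-- ask Blueprint to hand ${...} tokens over to the JEXL evaluator -->
    <ext:property-placeholder evaluator="jexl"/>

    <!-- any bean will do, we just need something to publish as a service -->
    <bean id="sample" class="java.lang.String"/>

    <service ref="sample" interface="java.lang.CharSequence">
        <service-properties>
            <!-- the value is computed by JEXL when the bundle starts -->
            <entry key="osgi.jndi.service.name"
                   value="${env:PWD.toUpperCase().replaceAll('-', '_')}"/>
        </service-properties>
    </service>

</blueprint>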

Copy that blueprint.xml directly into the deploy/ folder, and you can check from the Karaf shell that the dynamic invocation of those inline scripts has actually happened!

JBossFuse:karaf@root> ls (id blueprint.xml) | grep osgi.jndi.service.name
osgi.jndi.service.name = /OPT/RH/JBOSS-FUSE-6.2.1.REDHAT-107___3

This might come in handy in specific scenarios, when you are looking for a quick way to create dynamic configuration.

In case you are interested in implementing your own custom evaluator, this is the interface you need to provide an implementation of:

https://github.com/apache/aries/blob/trunk/blueprint/blueprint-core/src/main/java/org/apache/aries/blueprint/ext/evaluator/PropertyEvaluator.java

And this is an example of the service you need to expose to be able to refer to it in your <property-placeholder> node:
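
This is roughly what the JEXL evaluator bundle itself ships; I’m sketching it from memory, so double check the class and property names against the Aries sources:

<blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0">

    <!-- the PropertyEvaluator implementation (class name to be verified) -->
    <bean id="jexlEvaluator"
          class="org.apache.aries.blueprint.jexl.evaluator.JexlPropertyEvaluator"/>

    <!-- published with a name, so that evaluator="jexl" can look it up -->
    <service ref="jexlEvaluator"
             interface="org.apache.aries.blueprint.ext.evaluator.PropertyEvaluator">
        <service-properties>
            <entry key="org.apache.aries.blueprint.ext.evaluator.name" value="jexl"/>
        </service-properties>
    </service>

</blueprint>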

Tuesday, April 26, 2016

Deploy and configure a local Docker caching proxy

Recently I was looking into caching of Docker layer downloads for the Fabric8 development environment, to let me trash the VMs where my Docker daemon runs without having to re-download the base images every single time I recreate a VM.

As usual, I hit Google first, and was pointed to these two pages:

http://unpoucode.blogspot.it/2014/11/use-proxy-for-speeding-up-docker-images.html

and

https://blog.docker.com/2015/10/registry-proxy-cache-docker-open-source/

Since I quite often use Jerome Petazzoni’s approach of a transparent Squid + iptables setup to cache and sniff simple HTTP traffic (usually HTTP invocations from Java programs), the first solution made sense to me, so I tried that first.

It turned out that the second link was what I was looking for; but I still spent some good learning hours with the no longer working suggestions from the first one, learning the hard way that Squid doesn’t play that nicely with Amazon’s CloudFront CDN, used by Docker Hub. But I have to admit that it’s been fun.
Now I know how to forward calls from Squid to an intermediate interceptor that mangles query params, headers and everything else.
I couldn’t find a working combination for CloudFront, but I am now probably able to reproduce the infamous Cats Proxy Prank. =)

Anyhow, as I was saying, what I was really looking for was that second link, which shows you how to set up an intermediate Docker proxy that your Docker daemon will try to hit before defaulting to the usual Docker Hub public servers.

Almost everything I needed was in that page, but I found the information a little more cryptic than needed.

The main reason is that the example assumes I need security (TLS), which is not really my case, since the proxy is completely local.

Additionally, it shows how to configure your Docker Registry using a YAML configuration file. Again, not really complex, but still more than needed.

Yes, because what you really need to bring up the simplest local (not secured) Docker proxy is this one-liner:

docker run -p 5000:5000 -d --restart=always --name registry   \
  -e REGISTRY_PROXY_REMOTEURL=http://registry-1.docker.io \
  registry:2

The interesting part here is that the registry image supports a smart alternative way of receiving its configuration, which saves you from passing it a YAML configuration file.

The idea, described here, is that if you follow a naming convention for the environment variables that reflects the hierarchy of the YAML tree, you can turn something like:

...
proxy:
  remoteurl: http://registry-1.docker.io
...

That you would otherwise write in a .yaml file and pass to the process in this way:

docker run -d -p 5000:5000 --restart=always --name registry \
  -v `pwd`/config.yml:/etc/docker/registry/config.yml \
  registry:2

Into the much more convenient -e REGISTRY_PROXY_REMOTEURL=http://registry-1.docker.io runtime environment variable!
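
The same naming convention covers any other setting in the registry configuration; for instance, the TLS options that we will use later in this post map like this:

# YAML form
http:
  tls:
    certificate: /certs/domain.crt
    key: /certs/domain.key

# environment variable form
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key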

Let’s improve the example a little, so that we also give our Docker proxy a non-volatile storage location for the cached layers, and we don’t lose them between invocations:

docker run -p 5000:5000 -d --restart=always --name registry   \
  -e REGISTRY_PROXY_REMOTEURL=http://registry-1.docker.io \
  -v /opt/shared/docker_registry_cache:/var/lib/registry \
  registry:2
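
Since the cache now lives on the host filesystem, you can also eyeball it growing while images are being pulled:

du -sh /opt/shared/docker_registry_cache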

Now we have everything we need to save a good share of bandwidth every time we pull a Docker image that has already passed through our local proxy.

The only remaining bit is to tell our Docker daemon to be aware of the new proxy:

# update your docker daemon config, according to your distro
# content of my `/etc/sysconfig/docker` in Fedora 23
OPTIONS=" --registry-mirror=http://localhost:5000"
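
If your distro configures the daemon through /etc/docker/daemon.json instead of a sysconfig file, the equivalent setting should be something like this (assuming a Docker version recent enough to support daemon.json):

{
  "registry-mirrors": ["http://localhost:5000"]
}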

Reload (or restart) your Docker daemon and you are done! Just be aware that if you restart the daemon you might also need to restart the registry container, if you are working on a single node.

An interesting discovery has been that the Docker daemon doesn’t break if it cannot find the specified registry mirror. So you can add the configuration and forget about it, knowing that your interactions with Docker Hub will simply benefit from any possible hit on your caching proxy, assuming it’s running.

You can see it working with the following tests:

docker logs -f registry

will log all the outgoing download requests, and once the set of requests that compose a single image pull operation has completed, you will also be able to check that the image is now completely served by your proxy with this invocation:

curl http://localhost:5000/v2/_catalog
# sample output
{"repositories":["drifting/true"]}
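
For example, after pointing the daemon at the mirror you can pull an image that has never been downloaded on this node (busybox here is just an arbitrary small image), and it should show up in the catalog right after the pull completes:

docker pull busybox
curl http://localhost:5000/v2/_catalog
# library/busybox should now be listed among the repositories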

The article could end here, but since I feel bad showing how to disable security on the internet, here’s also a very short, fully working and tested example of how to implement the same with TLS enabled:

# generate a self-signed certificate; accept the default for every value apart from Common Name, where you have to put your box hostname
mkdir -p certs && openssl req  -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key  -x509 -days 365 -out certs/domain.crt

# copy it to the locally trusted ones; steps for Fedora/CentOS/RHEL
sudo cp certs/domain.crt /etc/pki/ca-trust/source/anchors/

# load the newly added certificate
sudo update-ca-trust enable

# run the registry using the keys that you have generated, mounting the files inside the container
docker run -p 5000:5000 --restart=always --name registry \
  -v `pwd`/certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
  registry:2

# now you just need to remember that you are working over https, so you need to use that protocol in your docker daemon configuration instead of plain http; also use it when you interact with the API via curl
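
Concretely, that means something like the following, where yourhostname stands for the Common Name you put in the certificate:

# /etc/sysconfig/docker
OPTIONS=" --registry-mirror=https://yourhostname:5000"

# querying the catalog over TLS
curl https://yourhostname:5000/v2/_catalog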