Let’s consider this simple playbook:
---
- hosts: node0
  vars:
    foo: ''
  tasks:
    - set_fact: foo='foo'
    - debug: var=foo
At first glance one would expect the foo variable to be set to the string value ‘foo’, but instead we get a strange result:
ok: [node0] => {
"foo": "VARIABLE IS NOT DEFINED!"
}
Did Ansible interpret the quoted foo as the foo variable and decide to unset it because it couldn’t deal with the recursion? In some other cases, like setting a variable to itself in a vars: dictionary, an infinite loop error is thrown, but not here.
Commenting out the set_fact task, we get the expected result:
ok: [node0] => {
"foo": ""
}
Replacing ‘foo’ with ‘bar’ as the value in the set_fact task, we get the following, which makes more sense:
ok: [node0] => {
"foo": "bar"
}
But let’s say bar is also a variable, like this:
---
- hosts: node0
vars:
foo: ''
bar: 'baz'
tasks:
- set_fact: foo='bar'
- debug: var=foo
What should we expect here? ‘bar’ is not evaluated as a variable; the string value is used as is, so we get the same result as when bar was not defined. When using double quotes, bar is expanded and the resulting value is ‘baz’; in the ‘foo’ case, we still get the same undefined message.
I couldn’t find a definitive statement anywhere in the documentation that clarifies the expansion rules for variables inside quoted strings. Are single-quoted strings supposed to be expanded or not? If so, is it now impossible to assign a variable its own name as a string?
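One way to sidestep the k=v shorthand and its parsing ambiguities entirely is the YAML dictionary form of module arguments, which in my experience leaves strings without braces alone. A sketch of the first playbook rewritten that way (same hosts and variables assumed):

```yaml
---
- hosts: node0
  vars:
    foo: ''
  tasks:
    - set_fact:
        foo: "foo"   # plain YAML string, no k=v shorthand involved
    - debug: var=foo
```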
I very much like Ansible, but this is a good example of how permissive syntax can shoot you right back in the foot. There are many ambiguities and tricky corner cases, sometimes due to functionality delegated to some other piece of software involved (e.g. YAML syntax limitations, reserved Python keywords, Jinja2 oddities, …).
In general, Ansible’s scoping model is perhaps too complex, with 16(!) levels of precedence and specificity, irregular definition mechanisms (e.g. tasks can define vars, but not blocks? really?), and subtle behavior oddities like the one highlighted here only amplify that complexity. In simple cases this is not much of a problem, but when trying to create generic, truly reusable roles it can cause many headaches, and in the long term it can even become a thorn in Ansible’s side: once this model gets adopted, it will be hard to replace it with a clean design. This seems to affect pretty much any declarative system that starts adding partial scripting capabilities. At first it’s just a bonus, then it becomes the reason you use it, and features are piled in to fill gaps as the need arises. At some point you have to wonder if it wouldn’t be a better idea to define a proper programming language from the ground up. An alternative starting point would be to use a language that is in itself its own input data, like Lisp or Scheme.
Ansible gives you freedom of boole: yes, no, true, false, True, False! Not quite. Well, it depends…
---
- hosts: node0
  vars:
    foo: no
  tasks:
    - debug: msg="this is not the boolean you're looking for"
      when: foo == no
If we forget the fact that when conditionals are actually Jinja2 expressions (and the syntax helps us forget this so well), then we’d expect this playbook to work just fine. But no is undefined in Jinja2, where booleans are true/false (although it now permits capitalized versions too).
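A sketch of the condition written in Jinja2’s own terms, which should behave predictably (same playbook assumed; YAML parses the unquoted no into a boolean false, so truthiness tests work too):

```yaml
- debug: msg="this is not the boolean you're looking for"
  when: not foo
# or, comparing explicitly against a Jinja2 boolean:
- debug: msg="still not it"
  when: foo == false
```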
{{ }} can be used to expand a variable. What if you need a string with {{ }} in it?
---
- hosts: node0
  vars:
    foo: '{{ bar }}'
  tasks:
    - debug: var=foo
ok: [node0] => {
"foo": "VARIABLE IS NOT DEFINED!"
}
Oops! As said above, it seems that single-quoted strings are expanded (in some cases, not others, and not by the same rules as double-quoted strings). The Jinja2 syntax for escaping this is actually: "{% raw %}{{ bar }}{% endraw %}". Probably the most cumbersome escape syntax ever designed. (The Liquid templates used in this blog exhibit the same nonsense.)
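For reference, a minimal playbook using that escape, which should store the literal braces in foo instead of triggering expansion:

```yaml
---
- hosts: node0
  vars:
    foo: "{% raw %}{{ bar }}{% endraw %}"
  tasks:
    - debug: var=foo
```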
Oh, but wait, can I write this?
---
- hosts: node0
  vars:
    foo: "{% raw %}{% raw %}{{ bar }}{% endraw %}{% endraw %}"
  tasks:
    - debug: var=foo
fatal: [node0]: FAILED! => {"failed": true, "msg": "template error while templating string: Encountered unknown tag 'endraw'.. String: {% raw %}{% raw %}{{ bar }}{% endraw %}{% endraw %}"}
Of course that first endraw is in the way, but that’s going to make things complicated.
This post here explains how to escape the escape sequence. And we could avoid those unbalanced-looking braces by using this awful workaround:
{% assign oTag = '{%' %}
{% assign cP = '%' %}
{% assign cB = '}' %}
{{ oTag }} raw {{ cP }}{{ cB }}{% raw %}{{ bar }}{% endraw %}{{ oTag }} endraw {{ cP }}{{ cB }}
In all honesty, this is absolutely horrific. And you do not want to see the markdown for this bit. Another alternative in Liquid’s case is to use html entities, but that’s assuming your target is html at all.
Note that ‘%}’ had to be split into two variables because the parser captures the first ‘%}’ it sees no matter what context it encounters it in, which creates the need to escape the escape sequence’s delimiter. This points to a rather inappropriate lexer being used, one that perhaps was not intended for this type of work.
The block documentation makes the following claim: “Most of what you can apply to a single task can be applied at the block level.” Apart from notable exceptions, like one of the most desirable features for organizing your code, namely vars:, that statement holds up, but with a slight twist.
---
- hosts: node0
  vars:
    foo: true # let's use j2 compatible syntax :P
  tasks:
    - block:
        when: foo
        - debug: msg='not so fast mate...'
That should work, right? Sadly, no, it doesn’t. Tasks can have when statements at the beginning, but blocks cannot. Which leads to some unfortunate situations:
---
- hosts: node0
  tasks:
    - block:
        - shell: /bin/do --some=stuff
          args:
            many: 'args'
            such: 'details'
            very: 'real world use case'
        - ...
        - ...
        - ...
        - ...
        - ...
        - ...
      when: very_lonely_down_here and not_quite_sure_im_aligned_right
When reading a long list of tasks and blocks of tasks, the condition for which that item will play ought to be one of the first things on your mind.
No hard feelings here though, combining different tools together allows you to build impressive systems very quickly, albeit with some coherence collateral damage…
This is not very useful, but by running it with various parameters and seeds one can notice some dead zones where very few values are produced (especially visible when using a 64x64 space division). Of course, it is common knowledge that this PRNG is not cryptographically secure, but it has otherwise very good properties for other uses. Nevertheless, a bit of an intriguing curiosity.
This was originally posted in the previous version of this blog, so here it is again.
The objective is simple: run a bash shell in a PID namespace with its own network namespace, bridged with the host’s network.
This is simple enough: use ‘unshare’ like this:
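The original listing was lost in migration; a likely reconstruction, assuming util-linux’s unshare (--mount-proc remounts /proc so tools like ps only see the new PID namespace):

```shell
# New PID namespace; --fork is needed so bash runs as a child of unshare
# and can become PID 1 of the namespace; remount /proc to match.
sudo unshare --pid --fork --mount-proc /bin/bash
```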
In order to isolate the networking too, we can use ‘unshare’ with a new network namespace via the ‘ip’ command:
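The listing is gone here too; a sketch using a named network namespace called ‘foo’ (the name used later in this post), then starting the PID-isolated shell inside it:

```shell
# Create the 'foo' network namespace, then run the unshare'd shell in it.
sudo ip netns add foo
sudo ip netns exec foo unshare --pid --fork --mount-proc /bin/bash
```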
You should now have an isolated bash process, with its own network stack:
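Inside the new shell, something along these lines should show the isolation (assumed commands; the original output listing is gone):

```shell
ps aux    # only the processes of this PID namespace are visible
ip addr   # only a loopback device, initially DOWN
```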
Network interfaces can only live in one namespace; in order to communicate between namespaces, we can use veth device pairs, which provide a sort of pipe behavior.
We can create a veth pair like this:
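The lost one-liner was presumably the standard veth creation command, matching the ‘veth-a’/‘veth-b’ names used below:

```shell
sudo ip link add veth-a type veth peer name veth-b
```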
On my Debian Sid system this didn’t create ‘veth-b’, and subsequent commands failed with ‘Cannot find device “veth-b”’. So I had to issue the reciprocal command:
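Presumably the reciprocal creation with the names swapped (hypothetical; the original one-liner is gone):

```shell
sudo ip link add veth-b type veth peer name veth-a
```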
Now, we’ll assign one end of the veth pair to the ‘foo’ namespace:
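Assuming the named ‘foo’ namespace from earlier, moving one end over would look like this:

```shell
# Move veth-b into the 'foo' network namespace.
sudo ip link set veth-b netns foo
```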
It should now appear under the ‘eth0’ name from inside our isolated bash shell, and we can assign it an appropriate IP address normally with ‘ip addr add’.
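From inside the namespace, this could look like the following; the address is an assumption, adjust it to your LAN:

```shell
ip link set veth-b name eth0            # rename to the familiar eth0
ip addr add 192.168.1.51/24 dev eth0    # assumed address on the host's LAN
ip link set eth0 up
ip link set lo up
```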
On the host’s side, we now want to bridge ‘veth-a’ with ‘eth0’ in order for our little container to access the network.
Warning: this might break your host’s network temporarily.
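A sketch of the bridging step (the original listing is gone; interface names as used throughout this post):

```shell
sudo ip link add br0 type bridge     # create the bridge
sudo ip link set eth0 master br0     # enslave the host's NIC
sudo ip link set veth-a master br0   # enslave the host end of the veth pair
sudo ip link set br0 up
sudo ip link set veth-a up
```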
Make sure they are all in state UP, and we should be able to ping the outside world from our isolated namespace. Now, the host was probably set up to use ‘eth0’, and as it is now part of the bridge, that will not work anymore. To get an equivalent setup, we’ll need to transfer the IPs assigned to ‘eth0’ to ‘br0’ and update our routing table. (I’m assuming a very simple setup here, e.g. my laptop connected via ethernet with a static IP.)
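Something along these lines (the concrete addresses are assumptions matching the static-IP laptop setup described above):

```shell
# Assumed addressing: host 192.168.1.50/24, default gateway 192.168.1.1.
sudo ip addr del 192.168.1.50/24 dev eth0
sudo ip addr add 192.168.1.50/24 dev br0
sudo ip route add default via 192.168.1.1 dev br0
```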
Now, your host’s networking should be back on track (actually, I had to bring the interfaces down and back up for it to work), and you can ping your host and isolated namespace transparently using their respective IP addresses. A thing of beauty.
Of course you probably don’t want to start setting up systems manually like this, but doing so does help me grasp a bit better how LXC, Docker and the like are working.
It turns out that in the current state of Debian Sid/unstable there are a few confusing package dependencies: gqrx-sdr depends on libgnuradio-*3.7.5, and not the more recent 3.7.8 versions which are installed by the gnuradio package. Furthermore, it also depends on libgnuradio-osmosdr0.1.3, which depends on some libboost packages that break with the recent libstdc++6 ABI change.
To get around this without too much recompilation pain, one can install gnuradio 3.7.8 (plus gnuradio-dev) and manually compile libgnuradio-osmosdr, gr-osmosdr and gqrx-sdr.
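Both are stock CMake projects, so the build is the usual routine in each source tree (a sketch; exact flags and repository locations are not from the original post):

```shell
# Run inside each checked-out source tree (gr-osmosdr, then gqrx-sdr).
mkdir build && cd build
cmake ..
make -j"$(nproc)"
sudo make install
```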
As of yesterday, the latest revisions of osmocom and the 2.3.2 source of gqrx-sdr were compiling without issues.
I haven’t had a chance to test it much yet, will edit with more notes when I do.
EDIT: It works. Just make sure to have libhackrf-dev installed before compiling the osmosdr gnuradio extension, otherwise hackrf devices won’t show up in gqrx.
It started with a Linux amd64 VMWare disk image I wanted to load into VirtualBox, which systematically failed with an error complaining that VT-x was not available.
Looking at the BIOS cpu config all virtualization options were enabled.
Of course the i5-M520 CPU on the X201 supports VT-x, SMX, TXT and such…
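The check was presumably something like this (the original listing is gone):

```shell
# vmx = VT-x; smx = Safer Mode Extensions, which TXT relies on.
grep -E -w -o 'vmx|smx' /proc/cpuinfo | sort -u
```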
And of course the VT-x flag was set in the VBox guest configuration…
Through all the forums and bug reports out there, the closest I found was a VirtualBox bug report that got closed as “worksforme”. Very encouraging, this was.
Some users with similar issues seemed to get by with a prior VirtualBox version (< 4.3), but Oracle did not provide packages for Sid, and the Wheezy package is not compatible with the latest kernels.
After giving up on BIOS upgrades due to flaky supposedly bootable ISOs from Lenovo I turned to KVM, assuming this was probably a VirtualBox issue.
But it turned out KVM would not work either: /dev/kvm would not be created, and warnings indicating that the BIOS had disabled virtualization showed up in syslog.
This confirmed for me that this was really a BIOS issue after all. I remembered seeing a reference to conflicts between Intel AMT and VT features in some Lenovo BIOSes, but AMT was definitely disabled and had been for ages.
Then I tried toggling the virtualization options in the BIOS off and back on again. At this point I had a /dev/kvm and was back in business.
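A quick way to confirm KVM is usable again (commands assumed, not from the original post):

```shell
ls -l /dev/kvm    # the device node should now exist
lsmod | grep kvm  # kvm and kvm_intel should be loaded
```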
It’s still unclear, however, whether there was some state confusion in the BIOS about the virtualization options, or about AMT, and whether AMT was indeed conflicting.
And finally got around to restyling this space. Hopefully fewer retinas will be hurt in the future.