Category Archives: Technical

Technical items

SUE2014: Wrap up


As mentioned in my previous blog post, I went away right after the CFEngine workshop; I was so full of information and knowledge that I could not cope with any additional input. I think it was a great day. The last time Snow organized such an event I didn't even work there yet, and I think Snow did a really good job after such a long time. Personally I applaud this and hope to see it again next year. I challenge you, as a reader of my blog, to be there if at all possible, or even to give a talk! Please contact me in case you are interested in that.

I cannot wrap up this blog spree without thanking a few people for making this happen. The staff at Snow did tremendous work in little time to get this going; they really did a good job and handled everything fine! There were a few long days before the event started and everyone had to work hard to make this a success. I think they really earned a Thank You! from everyone who visited the event (or is currently visiting it!).

Hope to see you all next year!

Remko
Network and Security Consultant for Snow B.V. in Geldermalsen.

SUE2014: CFengine3 workshop / Mark Burgess


Instead of attending more talks, I was invited to join Mark Burgess's workshop on CFEngine3. As someone who plays with Puppet and has set it up on my VMs and colocated machines (a good 10 machines, some serving similar functions, some doing totally different things), I have some experience with configuration management, even though I am mainly a networking and security consultant :-)

What happened is difficult to describe; Mark is excellent at giving such a tutorial or workshop, and his motivation and energy are a lot to cope with in so little time.

The workshop itself was great, and even that does not actually cover it, but it is the best word I can find. Comparing Puppet and CFEngine is not really fair; both do things in their own way. Puppet has its advantages, but clearly CFEngine does as well. As a security consultant I found it interesting that you are able to remotely tripwire a system: in a group of machines, every machine can check another machine for file changes and report to the hub (if you want) when there are differences. CFEngine will try to fix those differences so that the system is back in the state you want it to be in, while still alerting you that something was fishy and might need investigation. By default this all happens every 5 minutes, so on average you are about 2.5 minutes away from having a fix applied to your system automatically, and you still get the alert.
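The tripwire idea described above can be sketched in a few lines: hash files, compare against a stored baseline, and report any drift. This is my own illustration in Python, not CFEngine's actual implementation; the function names and the baseline format are hypothetical.

```python
import hashlib

def file_digest(data: bytes) -> str:
    """Return the SHA-256 digest of a file's contents."""
    return hashlib.sha256(data).hexdigest()

def detect_drift(baseline: dict, current: dict) -> list:
    """Compare current digests against a baseline; list changed or missing files."""
    drift = []
    for path, digest in baseline.items():
        if current.get(path) != digest:
            drift.append(path)
    return sorted(drift)

# Example: /etc/passwd changed, /etc/rc.conf disappeared.
baseline = {"/etc/passwd": "aaa", "/etc/rc.conf": "bbb", "/etc/hosts": "ccc"}
current = {"/etc/passwd": "zzz", "/etc/hosts": "ccc"}
print(detect_drift(baseline, current))  # → ['/etc/passwd', '/etc/rc.conf']
```

In the CFEngine setup described above, the "report" step would go to the hub and the "fix" step would restore the file; the sketch only covers detection.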

The concept of CFEngine is that it will only work on things that it is able to fix or repair by itself: "self healing". If that is not an option, then you are most likely not using CFEngine.

The syntax is a bit odd, though; it is something you really need to get used to, since it is not as clear (to me) as a programming language where the logic is defined explicitly. Certain keywords describe the function of a variable, and depending on the keyword, assigning or requesting them makes something happen. That can get kind of ugly once you have $(variable[$(nested_variable)]), which is still pretty easy, but it becomes unreadable when you do complex things with it. Or at least that is my understanding.

Another thing that I miss is a standard library path. CFEngine makes some odd assumptions (in my opinion) about where files are located. If you, for example, run cf-agent -f $fileinyourdirectory, it will look up $fileinyourdirectory in /var/cfengine/inputs/ instead of the directory you are in. You need to prefix the filename with ./ to have it look in your current working directory (cwd). In addition, some functions require the cfengine_stdlib file, which is fine, but unless you are in the /var/cfengine/inputs/libraries directory you need to specify that it is in that directory or a subdirectory, instead of CFEngine looking for that content in the default locations.

That said, since I am just starting to learn Puppet, I will most likely stick with it for a little while longer. CFEngine surely has its potential. I also had a little chat with Glen Barber from the FreeBSD Foundation yesterday, and he goes berserk over CFEngine; things he showed me puzzled me... but then again, he gets puzzled by Puppet stuff.

Many thanks to Snow and to CFEngine/Mark Burgess for making it possible for me to be there. It was a hard three-hour workshop that fried my brain entirely (I needed to leave immediately afterwards because I could not take in any more information :-)), but it was well worth it. For me personally this was the best thing at SUE2014!

SUE2014: Samba4 headsup / Jelmer Vernooij


After attending Mark's talk there was a little coffee break, and then we continued with Jelmer Vernooij's Samba4 heads-up.

This talk actually scared me a lot. As a member of the FreeBSD project, I know how big software projects can work, and that is not entirely how Samba works. Samba seemed to be kept alive by 'tridge', who every now and then wakes up from his cave, starts coding something, puts it all in one big file and throws it over the hedge, hoping that people will pick it up.

Jelmer iterated through the history of the Samba project and demonstrated that it is a hack-ish project held together with superglue, so that things work and, if you are lucky, keep working. I somehow missed the actual heads-up about Samba4 and the plans for where the project is heading.

I now know that even though I find Samba an amazing product that works fairly well most of the time, it is actually one scary project, and perhaps they need to do something about the PR around it. My suggestion would be much more positive talk around the project: showing what you CAN do, demonstrating what Samba4 now has (AD Domain Controller support) and how that works, as an example. And what hopefully will work at some point in time.

Another suggestion I would have is to look at big operating system projects and appoint a 'core' team that leads the project by consensus and gives an estimate of where things should be heading. Keep away from having multiple developer communities develop samba3 and samba4 separately; it will become one big mess.

This talk didn't actually give me what I hoped to get. Jelmer tried his best and he knows a lot, do not get me wrong, but the promise of the talk title and the actual talk were too far apart.

SUE2014: The collapse of complex infrastructure / Mark Burgess


As promised, here is a little blog about some of the talks I attended. The day went a bit differently than first anticipated, because I also followed Mark's CFEngine 3 workshop. I will write something about that as well in a different blog post.

Mark talked about the collapse of complex infrastructures and drew parallels to the collapse of complex societies. Did you know that societies actually go through the same problems as our infrastructures do?

One of the key points Mark made is that we are good at making things bureaucratic: breaking things up so that everyone does their own thing, until you have a chain of different groups all working on one product or service but each doing their own thing. The cost of communicating with each other increases as the complexity of the company or social group increases. At some point this no longer works, and there is the collapse. This is not a problem per se; sometimes we need complexity to get our product going, but we need to acknowledge that and guide the process, for example by easing the communication between those complex groups. They should be as autonomous as possible, lowering the cost of communication.
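The communication-cost argument above can be made concrete with a bit of arithmetic: in a fully connected group of n parties, the number of pairwise communication channels is n*(n-1)/2, which grows quadratically. This is a standard illustration, not something from Mark's slides.

```python
def channels(n: int) -> int:
    """Number of pairwise communication channels between n parties."""
    return n * (n - 1) // 2

# One team of 7 people has 21 internal channels; split the work over
# 7 teams that all need to coordinate with each other and you have added
# 21 inter-team channels on top of the intra-team ones.
for n in (2, 7, 14):
    print(n, channels(n))  # → 2 1, 7 21, 14 91
```

Doubling the number of parties roughly quadruples the channel count, which is exactly the "cost of communicating increases with complexity" effect described above.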

We really need to look properly at the information we have about societies and use it to build better companies. We can learn from the mistakes that were made, but still we manage to keep making the same ones. The beast is made more complex, the cost of doing something increases, and finally the beast no longer provides what we need.

I started my working life at ING Bank back in 2001, and we saw exactly this happening there: split, split, split, split. Instead of one group of specialists doing all the work, 7 groups did it, and they were not able to communicate properly with each other. Getting things done cost a huge amount of effort and time. The beast grew too wild and no one had the autonomous power to prevent this from happening. Later on I understood that this kept changing even after I left; I honestly have no idea how it works now or whether it performs as it should. But at least I can point to real-life examples from my own work history where these things happened.

Snow Unix Event 2014

Today I will be at the office for the Snow Unix Event 2014 (SUE2014 for short). The day will feature several interesting people, like Mark Burgess, who will talk about "The collapse of Complex Infrastructures". Since we try to reduce complexity in larger and larger infrastructures, this sounds like a very interesting talk.

Next up will be "The Samba4 headsup" by Jelmer Vernooij. There are rather large changes in the Samba4 code; for the first time you will be able to set up a domain controller with Samba in the Active Directory world. Jelmer is going to talk about that and more.

My colleague Martijn Posthuma will talk about RHEV/RHSS and how you can use a hypervisor for storage functionality. Since I do not know much about how that all works, I am very interested in hearing what the possibilities are.

As the penultimate talk of the day, John D. Cook will explain how you can build a reliable system on top of unreliable parts. Since that is what we all do, I am eager to learn more about this.

Finally, Ronny Lam will talk about Software Defined Datacenters, SDN and NFV. Ronny is an old colleague of ours who joined NetYCE a few years ago, doing the things he will talk about. Central orchestration and defining how things should look is, if you ask me, where companies are heading.

The day looks interesting and I will write about the talks as they pass by. The posts will be prefixed with SUE2014: Talkname / host, so that you can easily see what each talk is about and how I think we can use it.

Of course I hope that we will do this all again next year, and since you are most likely already too late when you read this (except for the few who are already on the guest list :)), I hope that you will be there next time as well, either as someone giving a talk or as someone following them. Stay tuned!

http://sue2014.snow.nl  & http://snow.nl

iSCSI

So. Last night I figured that I wanted to play with disks served from my QNAP 659-pro. I can of course mount disks directly via NFS, but that is less awesome (sorry, we are watching Chuck on Netflix, and Captain Awesome... is just... awesome) than having a virtual disk ready for you with a predefined size. At home I have two fully loaded Mac Minis with VMware Fusion on them. Combined I run some 12 VMs, all with a minimal disk attached via VMware. But of course those are only local and are not backed up at the moment, because of the space requirements.

Last night, while playing with the QNAP settings, I got distracted by this thing called iSCSI. I know that one can use it to provision disks from the storage and offer them to a host as if they were locally attached disks. I couldn't get it working last night (tired), but now, in the garden and in the sun, I noticed what went wrong: the examples talk about a very different IQN than the one FreeBSD clients use. Of course I should have figured that out last night, but I didn't see it.
The correct IQN is iqn.1994-09.org.freebsd:hostname, where hostname should be replaced with the name of YOUR system :-).
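For clarity, the initiator name above is just a fixed prefix plus your hostname. A trivial helper makes the pattern explicit (illustration only; the helper name is mine, and only the iqn.1994-09.org.freebsd prefix is the part that matters):

```python
def freebsd_iqn(hostname: str) -> str:
    """Build the FreeBSD initiator IQN for a given hostname."""
    return "iqn.1994-09.org.freebsd:" + hostname

print(freebsd_iqn("myhost"))  # → iqn.1994-09.org.freebsd:myhost
```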

This way I got a disk:

da1 at iscsi13 bus 0 scbus3 target 0 lun 0
da1: <QNAP iSCSI Storage 4.0> Fixed Direct Access SCSI-5 device
da1: Serial Number <my serial number>
da1: 20480MB (41943040 512 byte sectors: 255H 63S/T 2610C)

This you can easily newfs, or partition as needed. Currently it does nothing locally, but I am going to consider giving all VMs that need storage a storage disk on the QNAP directly, so that that data is safe. There is no need to back up the rest of the data, because once I finally have Puppet running it is just a matter of redeploying the host and reattaching the storage to have the machine available again.

I just love automation :-)


Tarsnap backup script

Tarsnap

Tarsnap is an advanced online backup facility, entirely encrypted. The only copy of the keys used to encrypt and decrypt archives is in your own possession, so things that should be kept safe are (in the current form) safe. Tarsnap makes extensive use of Amazon EC2 and Amazon S3 for storage.

Tarsnap was written by FreeBSD Security Officer Emeritus Colin Percival, who periodically gives talks about it at various conferences. If you are able to, you should seriously consider attending one of those talks.

Script

Recently I rewrote the tarsnap backup script by Tim Bishop (http://www.bishnet.net/tim/blog/2009/01/28/automating-tarsnap-backups/) into a script more suitable for us.

Tim backs up all his data via Tarsnap in the same way. That works well for him, but for our hosting company it is more tricky. We do not want to keep large amounts of data for our customers (data which tends to change rapidly, for example emails that come in, go out, and get deleted). Instead we want to keep a minimal amount of data for these customers, and offer them more advanced backup strategies for which we charge an increased price (the minimal backup strategy is free).

After discussing it, we decided that next to the free strategy we would like to offer a medium-term backup strategy and a maximum-term backup strategy, where the former keeps a month of backups (7 weekdays, 4 weeks) and the latter three months of backups (7 weekdays, 4 weeks, 3 months), so that going back in time is doable. If customers want a customized strategy, that would of course be possible if we add it to the script.
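The three strategies boil down to which rotating archive names are retained. This is a sketch of that bookkeeping, not our actual script; the strategy names and archive names ("daily-Mon", "weekly-1", ...) are illustrative assumptions.

```python
def archives_to_keep(strategy: str) -> list:
    """Archive names retained by the free, medium and maximum strategies."""
    days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
    keep = [f"daily-{d}" for d in days]                # 7 weekday archives
    if strategy in ("medium", "maximum"):
        keep += [f"weekly-{w}" for w in range(1, 5)]   # 4 weekly archives
    if strategy == "maximum":
        keep += [f"monthly-{m}" for m in range(1, 4)]  # 3 monthly archives
    return keep

print(len(archives_to_keep("free")))     # → 7
print(len(archives_to_keep("medium")))   # → 11
print(len(archives_to_keep("maximum")))  # → 14
```

Because the daily archives rotate in place (Monday overwrites last Monday, week 1 overwrites the week-1 archive of last month, and so on), the amount of stored data stays bounded per strategy.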

Since we are keen on open source, we would like to offer you the option to download the script, and if possible even enhance it further so that we can all benefit from it. Do note that we didn't try to make the script clever; instead we kept it as simple as possible. That means we added more lines than strictly needed, but it is very readable. One comment we got from Colin so far is that tarsnap is capable of removing several archives in one go (tarsnap -d -f archive1 -f archive2), which is not yet implemented in the script. We will consider doing so... of course :-)
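Colin's suggestion, deleting several archives with one tarsnap invocation by repeating -f, can be prepared like this. We only build the argument list here and do not actually run tarsnap; the archive names are hypothetical examples.

```python
def tarsnap_delete_cmd(archives: list) -> list:
    """Build a single `tarsnap -d` command that deletes several archives,
    passing each archive name with its own -f flag."""
    cmd = ["tarsnap", "-d"]
    for name in archives:
        cmd += ["-f", name]
    return cmd

print(tarsnap_delete_cmd(["host-daily-Mon", "host-daily-Tue"]))
# → ['tarsnap', '-d', '-f', 'host-daily-Mon', '-f', 'host-daily-Tue']
```

Compared to one tarsnap run per archive, a single invocation avoids repeatedly downloading and uploading the cache-directory state, which is where the savings come from.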

The script can be found here, tarsnap.script.

20131013
Updated the script with the update from Tim; this has been tested and works fine for us so far. Thanks Tim! I shamelessly used your code in ours ;-)

Spamfilters

Having been in the systems administration field for a while, I have met a large number of spamfilters. Even at home I played with some of them and found them quite effective. Lately, though, I am seeing a decrease on my hosts that makes me wonder. SpamAssassin's spamd (which I use) has not had any new updates for some time already. That might mean that all spam is the same as it has always been, or that something else is fishy.

My spamfilters were quite effective until around two weeks ago. An effectiveness of around 90% suddenly decreased to 66% (as per spamstats.pl, which reports on the spamd output). A few days ago I also set up some relays that already filter against a lot of known blacklists and prevent that mail from even entering the system. Some of the domains that I manage have been altered to use them (not the JR-Hosting domains, that is). I also see a large number of blocks there (as I expected). But still, of every 1000 emails being sent per day at the moment (OK, the volumes aren't impressive :)), at least 330 have been spam over the last two weeks.

I could of course retrain my filters (on around 200 emails or so instead of too many). If there are suggestions, please let me know!

ZFS partitions recreated

This weekend I didn't invest as much time as I wanted in FreeBSD and the like. I was recreating my ZFS partitions. Why? They worked fine, right? Yes, they did. But they were controlled by a Highpoint controller offering a pseudo (software) RAID setup. Ed Schouten recently told me something I already knew but that got me thinking: ar0 is incapable of supporting RAID-5 in the first place. So my RAID configuration might not have been a RAID configuration after all.

This made me do my periodic backup to DVD (backing up this much data to DVD is interesting, I can tell you that :-)). I copied all data over to my workstation (which has almost the same storage capacity, only it is just two disks, with no redundancy at all) and burned the beasts to DVD (still am, actually).

I destroyed my old ZFS pool (just a single disk, 'ar0'), disabled RAID-5 in the Highpoint controller BIOS and restarted with just 4 disks. I recreated my ZFS storage with "zpool create data raidz ad14 ad15 ad16 ad17", which fired up the storage area again. After that I started copying the data back. I have copied almost 500GB this weekend already and it is still going (and will be for the rest of the night, I think). I liked the idea of having my data backed up on my workstation as well, so I will probably rsync the storage area periodically, probably during the day when I am not home, so that the gigs of data can flow over the wire (gigabit network) without bothering anyone. At least it saves a little and I can turn down my machines where needed.

I will also recreate my /storage/nfs folders so that my UltraSPARC (donated by Robert Blacquiere) can do much nicer things again by using network storage (only Fast Ethernet here (100mbit), but that doesn't kill the pleasure :)).

So yes, not much activity for FreeBSD itself, but I am laying the foundation to be able to do something like that again (UltraSPARC tindy perhaps?)....


New USB support incoming and 'feature' list for 8.0

Discussion and development on USB have been going on for a long time. Hans Petter Selasky (hps) has rewritten the entire USB stack, and it seems to be moving forward enough that Alfred Perlstein is willing to commit it into the FreeBSD tree.

Of course things do not go without resistance; there are always (and will always be) people who are sceptical, against it, afraid of it, etc. Not only developers, but consumers and commercial users as well. Nothing new there, nothing wrong there either.

So, with a bit of luck the new code will go in so that it can mature for the 8.0 release of FreeBSD.

Did you know, by the way, that there has been a lot of development for 8.0? MPSafeTTY, USB, DTrace, a new ZFS drop, the migration from CVS to SVN, work on LLVM (contribute, people!), even better ULE support, Superpages improved by alc@, NFS locking and GSSAPI crypto, perhaps NFSv4, VImage (Marko Zec's virtualized networking stack), and many, many more new features that are not in the current -RELEASE trees and will probably never be in those trees (they will be in future branches, of course). Of course the above is no promise that it will all actually be in there, but as it currently looks: chances are it will be ;-)
