My gear

Based on the gear page of Paul Grevink (just use Google), I decided to publish my lab setup as well. Since it is based on Paul’s gear, you will see some similarities. We both have a Synology DS218+, but admittedly we both had that before we got to know each other.

Gateway

My gateway, terminating the KPN fiber line we have, is a UniFi Cloud Gateway Fiber, hosting a couple of VLANs, an IGMP proxy for KPN iTV, and IPv6 as well. It is managed through the UniFi Network Application, which also drives the other UniFi switches and access points. It is relatively easy to set up and manage and has so far worked like a charm, which is way better than my experience with the RB5009 that ran the network before.

Switches

My household uses a few switches: a UniFi US-8 with PoE out hosts a few connections downstairs, and in my lab I currently have a TP-Link Omada SG3428. As ‘intermediate’ switches I use the cheap Ubiquiti Flex Minis to pass the VLANs through, plus one Switch Ultra (60W), which powers one of the access points and one of the Flex Minis. I am considering replacing the Omada with an equally fast or faster UniFi switch at some point.

Access points

There are three APs in the house: one on the bottom floor, one on the bedroom floor and one on the top floor, all Ubiquiti U7 Pros.

VMware cluster

My VMware cluster (as a virtualisation engineer you must have one) consists of three NUC13 Pros (RNUC13ANHI70002), each with a Samsung 990 PRO 1TB SSD, a Kingston A400 480GB SSD and a small 64GB M.2 local disk. Each machine has 64GB of Kingston Fury memory and an additional NIC to support a distributed switch with multiple uplinks. This forms a vSAN cluster, which hosts several VMs, among them my playground machines, home automation, and things like databases. Each NUC has an Acasis NVMe Thunderbolt adapter attached, holding an additional Samsung 990 PRO 1TB SSD. These additional disks are used for NVMe memory tiering, giving each node an additional ‘256GB’ of memory.
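For reference, enabling NVMe memory tiering on an ESXi host can be sketched with the vSphere 8.0 Update 3 esxcli commands below. The device path is purely illustrative (not my actual disk identifier), and the 400% value matches the ratio in my setup (400% of 64GB DRAM ≈ 256GB of NVMe tier):

```shell
# Enable the memory tiering kernel setting (requires a host reboot).
esxcli system settings kernel set -s MemoryTiering -v TRUE

# Claim the Thunderbolt-attached NVMe SSD as a tier device.
# NOTE: the device path below is an illustrative placeholder.
esxcli system tierdevice create -d /vmfs/devices/disks/t10.NVMe____Samsung_SSD_990_PRO_1TB_EXAMPLE

# Size the NVMe tier as a percentage of DRAM (400% of 64GB = 256GB).
esxcli system settings advanced set -o /Mem/TierNvmePct -i 400

# Reboot the host afterwards for the tier to come online.
```

Note that this wipes the partition table of the tier device, so it should only ever point at a disk dedicated to tiering.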

The fourth node in this cluster is a nearly identical machine. It is a 12-core i7 node, also running 64GB of memory, with a quad-port PCIe NIC, a 480GB cache disk and a Samsung 990 PRO 1TB M.2 SSD.

The cluster runs various Unix VMs as well as vLCM, Aria Operations, Aria LogInsight, vIDM and, when needed, an NSX-T node. Even with these memory blocks available I am memory-constrained, so some VMs run only when needed.

NAS

Since I cannot afford a SAN at home, I have a NAS: a Synology DS218+ with 2x4TB (RAID 1) of storage available. Large VMs land here, and it also serves internal data and backups.

A second Synology, a DS1522+, serves the Proxmox node and various VMware VMs. I actively use its 4x1GbE connections in a dual-LAG setup for NFS4 access (the disks are the bottleneck). This Synology has 2x4TB RAID 1 and 2x3TB RAID 1 storage available. Important data from the DS218+ is also mirrored to this node.
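Mounting such a Synology NFS export on an ESXi host is a one-liner; the sketch below assumes the NFS 4.1 datastore type and uses an example hostname, export path and datastore name (none of these are my real values):

```shell
# Mount a Synology NFS 4.1 export as a datastore on an ESXi host.
# Hostname, share path and datastore name are illustrative examples.
esxcli storage nfs41 add -H ds1522.lab.example -s /volume1/vmware -v nfs-ds1522

# Verify the mount.
esxcli storage nfs41 list
```

With a LAG the host talks to a single IP and the switch spreads the traffic, whereas NFS 4.1 can alternatively multipath across several server IPs (comma-separated after `-H`) without any LAG at all.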

Probe

Somewhere along the line I got a RIPE Atlas probe (the hardware version); it has been running since 2016 and has only needed a couple of USB stick replacements :-).

Offline equipment

The following hardware is still in ‘storage’ and offline:

amount  description
1x      MikroTik CRS125
2x      UniFi AC Pro
1x      MikroTik RB2011
1x      MikroTik RB5009
1x      Experiabox 12
1x      UniFi Flex Mini
1x      USB NIC