A recent tweet caught my eye: a new version of NX-OSv was available, together with instructions on setting it up in vagrant. Very good timing too, as I'm building automation (a bit of orchestration and a lot of validation) for a couple of projects covering both the 7K and the 9K flavours of NX-API, and I could really use a decent machine-local lab.
UPDATE: The first two posts in this series detail the path to the solution that I present in part 3. The journey taught me a lot, so if you're not in a hurry, it's worth reading through.
First boot
Follow the instructions in this DevNet article to download and start a fresh Nexus 9000v image within a vagrant environment. The point of using vagrant is to be able to easily create and destroy development environments, so I will be looking to get rid of all manual steps.
I used version 7.0.3.I6.1 for this article. The box Cisco distributes does not have version metadata in it, so it will show up like this once it's added:
> vagrant box list
n9000v (virtualbox, 0)
# The 0 there is the box version, which is missing.
I tried to repackage with additional metadata, but vagrant doesn't like it for some reason and I can't figure out why - the official documentation is not helpful at all.
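For anyone who wants to take another crack at it, the standard mechanism is a metadata JSON file handed to vagrant box add. A minimal sketch of that format; the version string and box path here are illustrative:
{
  "name": "n9000v",
  "versions": [{
    "version": "7.0.3",
    "providers": [{
      "name": "virtualbox",
      "url": "file:///path/to/nexus9000v.box"
    }]
  }]
}
# Save the above as n9000v.json, then:
> vagrant box add n9000v.json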
Repackaging after setup
Hank Preston offered a great idea - after doing the first boot and setup, why not repackage the box locally so that any future instances are cloned from it? Exactly what I was looking for, so let's see how I did it.
Stop the VM you set up in the previous step if it's still running (vagrant halt) and open the VirtualBox GUI. In there you should see a VM named something like n9000v_default_somenumbers.
This is optional, but seeing as the NX-OSv 9000 documentation says the VM needs 4GB of RAM minimum, I am going to lower the 8GB default that comes with the box so I can run more of these on my laptop. Open the settings panel for the VM and set the RAM according to your use case.
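If you prefer to skip the GUI, VirtualBox can make the same change from the command line; a quick sketch, assuming the default VM name from the setup:
# VM must be powered off; sets memory to 4GB
> VBoxManage modifyvm "n9000v_default_somenumbers" --memory 4096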
Now we're ready to create a new box based on this VM. Back to the CLI, in the same folder as the Vagrantfile from the setup.
# Package a box from the VM
> vagrant package --output n9000v-4gb-ssh.box
# Now add it back, don't use the same name though as you'll get an error!
> vagrant box add n9000v-4gb-ssh.box --name n9000v-4gb-ssh
# Check that it's in the list:
> vagrant box list
n9000v (virtualbox, 0)
n9000v-4gb-ssh (virtualbox, 0)
Our new box is now ready to be used!
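To take it for a spin, a throwaway Vagrantfile generated by vagrant init is enough:
# In an empty folder: create a Vagrantfile pointing at the new box, then boot
> vagrant init n9000v-4gb-ssh
> vagrant up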
Connecting 2 switches
Once I had one of these running, I immediately wanted more (doh!): how about starting 2 switches and connecting a couple of their interfaces via Virtualbox internal networking?
Hank had already worked on this particular problem and quickly published a base Vagrantfile for interconnecting two of these boxes on GitHub, which provided me with a great blueprint.
After starting the two n9ks I tried to get them to communicate, but no luck. After some digging, I found, buried in documentation somewhere, that the network adapters in VirtualBox have to be set to "promiscuous allow-all" for any data-plane traffic to pass between the VMs. Fair enough; I added the snippet below to the Vagrantfile config of each node. The 2 in nicpromisc2 refers to the second adapter of that VM (in the order it was added to the config).
node.vm.provider "virtualbox" do |v|
  v.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
  v.customize ["modifyvm", :id, "--nicpromisc3", "allow-all"]
end
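To double-check that the setting actually took, you can query VirtualBox directly; the VM name here is a placeholder for whatever vagrant created for you:
# NICs 2 and 3 should report a promiscuous mode policy of allow-all
> VBoxManage showvminfo "n9000v_n9k1_somenumbers" | grep -i promisc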
A restart later, I could see that they were indeed connected, but still couldn't ping.
n9k2# show lldp neighbors
Capability codes:
(R) Router, (B) Bridge, (T) Telephone, (C) DOCSIS Cable Device
(W) WLAN Access Point, (P) Repeater, (S) Station, (O) Other
Device ID            Local Intf      Hold-time  Capability  Port ID
n9kv1                Eth1/1          120        BR          Ethernet1/1
n9kv1                Eth1/2          120        BR          Ethernet1/2
Took me longer than I'd like to admit, but I finally found the problem: both n9k1 and n9k2 use the exact same MAC addresses on their interfaces!
I think nxosv takes the base MAC at the initial boot, and whatever you do in VirtualBox afterwards won't be reflected in the MAC addresses nxosv assigns to its internal interfaces. Bummer.
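You can see the clash from the switches themselves. On NX-OS, something like the command below shows the address per interface; run it on both nodes and compare (output abridged, MAC illustrative):
n9k1# show interface ethernet 1/1 | include address
  Hardware: 100/1000/10000 Ethernet, address: 0800.276c.ee15 (bia 0800.276c.ee15)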
Just to test, I modified the MAC address on the n9k2 interfaces to something else and, boom, ping started working. While it's all fine and dandy to go in and fix things manually, that defeats the point of this whole exercise.
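For the record, the manual workaround is a one-liner per interface on the NX-OS side; the address below is just an example that differs from the n9k1 side:
n9k2# configure terminal
n9k2(config)# interface ethernet 1/1
n9k2(config-if)# mac-address 0800.276d.ee15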
What I'd like as a final result is for multiple nxosv VMs to be brought up by vagrant and to be able to communicate with each other. As you can see in the final (for now) Vagrantfile below, I added the options for setting a base MAC address for each VM and a MAC for each individual adapter. The VirtualBox GUI confirms these are applied (so vagrant does its job), yet nxosv still ignores them; I think to solve this we really need some help from our DevNet friends.
Vagrant.configure("2") do |config|
  # Deploy 2 nodes with two links between them
  config.vm.define "n9k1" do |node|
    node.vm.box = "n9000v-4gb-ssh"
    node.vm.base_mac = "0800276CEEAA"
    # eth1/1 connected to internal network nxeth1; auto-config not supported.
    node.vm.network :private_network, virtualbox__intnet: "nxeth1",
      auto_config: false, mac: "0800276CEE15"
    # eth1/2 connected to internal network nxeth2; auto-config not supported.
    node.vm.network :private_network, virtualbox__intnet: "nxeth2",
      auto_config: false, mac: "0800276CEE16"
    node.vm.provider "virtualbox" do |v|
      v.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
      v.customize ["modifyvm", :id, "--nicpromisc3", "allow-all"]
    end
  end

  config.vm.define "n9k2" do |node|
    node.vm.box = "n9000v-4gb-ssh"
    node.vm.base_mac = "0800276DEEAA"
    # eth1/1 connected to internal network nxeth1; auto-config not supported.
    node.vm.network :private_network, virtualbox__intnet: "nxeth1",
      auto_config: false, mac: "0800276DEE15"
    # eth1/2 connected to internal network nxeth2; auto-config not supported.
    node.vm.network :private_network, virtualbox__intnet: "nxeth2",
      auto_config: false, mac: "0800276DEE16"
    node.vm.provider "virtualbox" do |v|
      v.customize ["modifyvm", :id, "--nicpromisc2", "allow-all"]
      v.customize ["modifyvm", :id, "--nicpromisc3", "allow-all"]
    end
  end
end
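With the Vagrantfile above in place, bringing up the pair and spot-checking the links looks like this (assuming, as during setup, that vagrant ssh drops you at the NX-OS prompt):
> vagrant up
> vagrant ssh n9k1
n9k1# show lldp neighbors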
Now that the two images are set up, it's time for the real target of this exercise: the NX-API. In the second part of this series I start throwing stuff at it via the API and learn more about how vagrant and ansible really work.
And, as always, thanks for reading.