I'm setting up a VMM 2012 SP1 "private cloud" in my lab. I have a Dell C6220 with a total of 5 physical hosts: 1 host is the VMM 2012 SP1 box and the other 4 are Hyper-V hosts. Each Hyper-V host has 2 physical NICs. NIC 1 on all 4 boxes is plugged into a switch on a 147.191.x.x subnet for management and has a static IP. NIC 2 is plugged into another switch with 10.21.x.x subnet access, and each of those NICs is set up with a static IP address. Both NIC 1 and NIC 2 were pingable before I did anything in VMM. I have not configured any virtual networking locally through the Hyper-V console. I created my logical network in VMM, along with the IP pool and all that, and I believe it looks good. I am trying to go through the creation of a logical switch to take advantage of the centralized management, and it looks OK.
When I do that and add a virtual switch to the hardware section of each host (and assign it to the correct NIC), the local network configuration for NIC 2 on each Hyper-V host goes away, and so do the check boxes next to IPv4, IPv6, and the other bindings. Now that static IP address is no longer pingable. Just for giggles, I deploy a template to the host, and of course it has no network; why would it? I don't know what I'm doing wrong here. I just can't seem to get networking straight for my cloud.
I am just trying to set up very basic networking: all VM traffic goes through NIC 2 and management goes through NIC 1. Those check boxes (management and placement) are set correctly on each host under Properties --> Hardware. I'm not trying to do any teaming or anything fancy. I am not sure if the logical switch is, by design, supposed to wipe out the settings on the physical NIC, or what. So confused. Any kick in the right direction would be very much appreciated.