NSX Controllers and east-west traffic.

In a typical network with VLANs we have our core, distribution, and access layers, and we typically run the default gateway as an HSRP address. Whichever router/switch is active in the pair answers when hosts ARP for their default gateway, which is the HSRP address. Otherwise, for east-west traffic in the same VLAN, the VM sends an ARP request onto the physical network to find the MAC address of the host it is trying to talk to.

Things work differently in the network overlay world. Since we are overlaying networks and tunneling all traffic, something has to sit off to the side and tell us how to reach our default gateway and how to get to guests within the same subnet. That something is the controller. A controller builds tables and tells an ESXi host how to find the MAC address of a corresponding VM, or how to reach its default gateway, which is a LIF (logical interface) living on either a Logical Distributed Router or an Edge Services Gateway. The controller is responsible for telling the ESXi host where to send the traffic, next-hop wise.

In this example I am showing two physical hosts, two VMs in the same VXLAN, and two controllers.

NSX-arp1

So let's first start a ping from Tenant-A-1 to Tenant-A-2.

ARP-2

So everything is pinging. Yay!

Now how does this work? If you recall, I said the controllers are what makes this possible. One of the controllers has to tell the ESXi host where Tenant-A-1 lives how to get to Tenant-A-2.

So let's log onto the controllers and run some commands to figure out how it knows this.

Arp-3

What the command show control-cluster logical-switches vtep-table 5000 tells me is that I have two hosts participating in VXLAN/VNI 5000: 192.168.3.101 and 192.168.3.100. The first host is referenced as Connection-ID 3, the second as Connection-ID 2. This will make sense in a few seconds.

Okay, so that is great, that's how that works. Now how does the ESXi host know where to send traffic while these are pinging?

Arp-5

The following two commands explain it all. The first shows the MAC addresses of both VMs on VNI/VXLAN 5000. The second shows how ARP is resolved to IP, the same way a normal Layer 3 router or switch would do it. So the controller tells the host how to get to each corresponding VM it needs to reach. Now let's vMotion everything to live on 192.168.3.101 and see what happens.
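For reference, the two commands in the screenshots should be along these lines (a sketch from memory of the NSX-v controller CLI; verify the exact syntax on your release):

nsx-controller # show control-cluster logical-switches mac-table 5000
nsx-controller # show control-cluster logical-switches arp-table 5000

The mac-table maps each VM MAC to the VTEP IP of the host it lives on, and the arp-table maps VM IPs to MACs so the hosts can answer ARP requests locally instead of flooding them.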

Arp-6

There you have it: the controller knows how to get to the MAC addresses of both Tenant-A-1 and Tenant-A-2.


VCP-Nv announcement / Lab

One of the announcements from VMworld 2014 is a new VCP track for network virtualization. The blueprint can be found on VMware's site, and I also posted a previous post on basic NSX concepts. I have been through quite a few installs of it. However, like anything else, I would like to master the technology around it. I work with everything except for the firewall and load balancers today. I have created the following topology.

NSXphysical

Physically I have two Dell 1950 servers running ESXi 5.5 and two whitebox servers I normally use for all sorts of testing. I really like the nested two-vCenter setup. So on my whitebox servers I am running vCenter, vCAC, and the NSX Manager across two hosts. The ESG, LDR, controllers, and guests all run on the 1950s. It makes sense to separate the control plane and data plane.

Logically this is how my lab is setup.

NSXlogical

Looking at the logical setup, I have three tenants: A, B, and C. Tenant A runs OSPF to an SVI (VLAN 100) from the ESG. Tenant B runs BGP, with OSPF redistributed into BGP and BGP into OSPF. Tenant C does not have an LDR, just an ESG, since I wanted to experiment with that concept. But like I said in my last post, this is exactly what we do with the network today; everything simply exists in the hypervisor.

Since I still receive emails today about people using my prior CCNP notes, I would like to kick off a similar series here where I put together notes on topics related to the VCP-NV. The problem anymore is really finding the time.

Building virtual networks with VMware's NSX

I have had time over the last three weeks to start setting up NSX, along with some help from VMware. I have been looking forward to something like this for a long time: the chance to do networking at a broad deployment scale without having to use physical networking gear. I will look at this from a network engineer's perspective, not a system admin / virtualization administrator's. I will quickly highlight some NSX terms that will be used.

-ESG (Edge Services Gateway). This is the edge of the NSX network that allows NSX to reach out to the physical network, i.e. via BGP, IS-IS, OSPF, or static routing.
-LDR (Logical Distributed Router). Similar in spirit to a DVS, this is a router that spans multiple hosts, inter- or intra-cluster, which allows for logical interfaces and a distributed default gateway.
-VXLAN. A VXLAN is similar to a VLAN in the Layer 2 world. VXLAN is where most of the magic happens and lets us virtualize our networks.
-VTEP (VXLAN Tunnel End Point). A VTEP is an IP address that each individual ESXi host receives. The hosts build tunnels between each other in order to overlay networks.
-VXLAN bridge. Allows bare-metal devices to participate in the same subnet as NSX.
-Transport zone. A transport zone defines the span of an overlay so that the ESG and LDR can talk to each other, similar to running a VLAN between multiple routers or switches.
-NSX Manager. The Manager speaks back and forth with vCenter.
-NSX Controllers. There are three NSX controllers that push routes down to each VTEP, telling each VTEP how to get to each server.

Alright, I am glad that is over. So I will go over the design I decided to use. Mine is a bit complex; I was lucky enough to use Nexus 7700s and Nexus 56128s in a leaf-and-spine setup.

NSXPhysical

So physically this is how my setup looks. I am using two ESGs for redundancy. Each ESG peers with a respective 7K. Between the edge routers and LDRs I am running OSPF as the dynamic routing protocol. This is extremely similar to how we do networking today; there is not much of a change, except for the way I am doing eBGP from the edge routers to the 7Ks. I will explain that in a later blog post, but I am using OSPF as a recursive lookup.

This design also pushes Layer 3 out to the edge, which is great, because us network people like Layer 3 over Layer 2.

Logically, this is what my design looks like with the underlay taken out of the equation.

NSXvirtual

Logically everything is the same. The idea here is that we are decoupling from the physical network and overlaying on top of it. This is great, as I can spin up as many edge routers as I want to. The ESGs and LDRs are simply VMs that reside in a cluster.

So how does everything work within NSX from a data flow perspective?

If the VMs I have pictured within 10.1.65.0/24 want to talk to each other, the flow is relatively simple. Each VM's traffic is forwarded up to the LDR. The host then checks with the NSX controller to see which VTEP the traffic should traverse for east-west forwarding. For traffic on a different subnet, a similar flow happens: traffic hits the LDR and is routed across its respective VXLAN.

Some known gotchas for anyone deploying NSX in the future:
Controllers and VTEPs have to have connectivity to each other.
The Manager has to have connectivity into vCenter and use an SSO account.
Never ever try to firewall VTEP traffic; it won't work out so well.
VTEP tunnels will not multipath. I.e., if I have two VTEP tunnels per ESXi host, only one will be used for forwarding as of the 6.0.4 release of NSX.
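On the connectivity point, a quick way to sanity-check VTEP-to-VTEP reachability (including the larger MTU VXLAN needs) is to ping across the VXLAN network stack from an ESXi shell. This is a sketch; the vmk interface number and destination VTEP IP here are from my lab:

~ # ping ++netstack=vxlan -d -s 1572 -I vmk3 192.168.3.101

The -d flag sets the don't-fragment bit, so a failure at 1572 bytes usually points at an MTU problem on the transport network rather than a reachability problem.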

IOS XR RPL examples

Here are a few examples of creating IOS XR RPLs. The idea is still vastly the same as route-maps, with the difference of live editing, similar to the way a file would be edited in vi on Linux. I really like XR; it's better than any OS Cisco has ever come out with. I will start off with an example of local preference and community strings, then throw it all together to show how it would be set up in XR.

Modifying local preference
IOS

route-map LOCPREF permit 10
set local-preference 200

IOS XR

route-policy LOCPREF
set local-preference 200
end-policy

Adding the no-export community

IOS

route-map NO_EXPORT
set community no-export
end

IOS XR

route-policy NO-EXPORT
set community (no-export)
end-policy

One takeaway with IOS XR: if there is an eBGP peering with an upstream neighbor, the RPL applied to
that neighbor has to include the pass action somewhere. So if I had a peering like so:

router bgp 1
address-family ipv4 unicast
neighbor 2.2.2.2
remote-as 2
address-family ipv4 unicast

Without an RPL facing 2.2.2.2, I will receive zero routes from 2.2.2.2. So in most demonstrations or
IOS XR best-practice guides there will be a pass command put into an eBGP policy. I like to do mine like so:

route-policy EBGP_PASS
pass
end-policy

So the config turns into the following.

router bgp 1
address-family ipv4 unicast
neighbor 2.2.2.2
remote-as 2
address-family ipv4 unicast
route-policy EBGP_PASS in
route-policy EBGP_PASS out

So just for some more examples of IOS XR: let's say I want to tag 10.76.0.0/16 and 10.77.0.0/16 with the
no-export community but let everything else go untagged, community-wise. First we create
what's called a prefix-set in XR:

prefix-set NO-EXPORT
10.76.0.0/16,
10.77.0.0/16
end-set

This is similar to a prefix-list; however, there is one really awesome thing about prefix-sets in XR:
you can edit them in place without potentially breaking anything. Once in edit mode I can
add anything else without having to remove the prefix-list, like in traditional IOS, or add a sequence number
somewhere along the path.

prefix-set-before

So let's continue on. I want the prefixes in the NO-EXPORT prefix-set to be tagged with the community but let
everything else through. Here is how I would set that up.

prefix-set-after
route-policy NO-EXPORT
 if destination in NO-EXPORT then
  set community (no-export)
  pass
 else
  pass
 endif
end-policy

So let's take a look at this policy. If a route matches prefix-set NO-EXPORT (10.76.0.0/16 and
10.77.0.0/16), the no-export community is set and the route is passed; anything else falls into the
else branch and is simply passed, and then the policy ends.
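To actually take effect, the policy has to be attached to a neighbor. A sketch using the same eBGP peering from earlier (NO-EXPORT ends with a pass in both branches, so it also satisfies the XR requirement for an explicit eBGP policy):

router bgp 1
 address-family ipv4 unicast
 neighbor 2.2.2.2
  remote-as 2
  address-family ipv4 unicast
   route-policy EBGP_PASS in
   route-policy NO-EXPORT out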

There are some other community strings, but you get the gist of it. You can also edit an RPL in the
same manner you can edit a prefix-set.
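The live editing is done with the edit command from config mode. A sketch (the available editors vary by XR release; nano is shown here as an example):

RP/0/RSP0/CPU0:XR(config)# edit route-policy NO-EXPORT nano

When you save and quit the editor, XR loads the edited policy back into the candidate configuration, so nothing takes effect mid-edit.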

Quick storage notes

I look over these notes generally when I am zoning a new server or trying to remember some functionality. I have been heavily involved with a lot of storage SAN switching lately, not just in the FC world but in the FCoE world as well, and I haven't experienced a meltdown just yet.

storage notes

Port types

For end devices:
N_port -> end host
NL_port -> end host in an arbitrated loop

Configured on the switches:
F_port -> switch port that connects to a node port
FL_port -> fabric loop port, where you would plug in loop-attached storage

E_port -> ISL port
TE_port -> trunking expansion port / extended ISL; passes VSAN tags
TF_port -> trunking fabric port; run to an NPV device similar to running dot1q to a hypervisor, without merging the fabric or pushing STP down to a server as in the Ethernet world
Addressing

WWNs – 8 bytes; similar to a MAC address
FCID – 3 bytes; similar to an IP address; the SAN switch assigns it

WWNN – an address assigned to the node; each server gets one
WWPN – the physical address of a port, like a MAC; each HBA gets one

FCID – this is where you route traffic to. It is built from:
*Domain ID
Each switch gets a domain ID
*Area ID
Each switch has an area ID
*Port ID
The end connection's port ID

sh flogi database -> gives you all the fabric logins
MDSA# sh flogi database
——————————————————————————–
INTERFACE VSAN FCID PORT NAME NODE NAME
——————————————————————————–
fc1/1 1 0x33000d 10:00:00:00:c9:84:b1:c7 20:00:00:00:c9:84:b1:c7
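As a worked example, the 3-byte FCID from the login above breaks down like this:

FCID 0x33000d
 Domain ID: 0x33 (the switch that assigned it)
 Area ID:   0x00
 Port ID:   0x0d (the end device)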

Device aliases make things easier, as you can give a node name or port name a friendly alias and match on that alias when devices are zoned.
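A sketch of what that looks like on an MDS, reusing the PWWN from the flogi output above (the alias and zone names here are made up):

device-alias database
 device-alias name ESX-HOST-1 pwwn 10:00:00:00:c9:84:b1:c7
device-alias commit
!
zone name ESX-HOST-1_TO_ARRAY vsan 1
 member device-alias ESX-HOST-1

Note that device-alias commit is only needed when the fabric runs in enhanced device-alias mode.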

Fiber channel logins

Flogi – fabric login; an N_port sends it to an F_port to register with the fabric
Plogi – port login; establishes the session between initiator and target
Prli – process login; the FCP application level (e.g. SCSI) starts sending traffic
SD – SPAN destination port for Fibre Channel
NP – node port, used with N_Port virtualization (NPV)

FCNS – similar to ARP; resolves WWN to FCID

# Fibre Channel name server
MDSA# sh fcns database

VSAN 1:
————————————————————————–
FCID TYPE PWWN (VENDOR) FC4-TYPE:FEATURE
————————————————————————–
0x33000d N 10:00:00:00:c9:84:b1:c7 (Emulex) scsi-fcp:init

HSRP Active/Active with vPC+ and Anycast HSRP

In this blog I will quickly demonstrate, with two similar topologies, how HSRP can run in an active/active state: first using vPC+, and second using Anycast HSRP with multiple spine switches.

The first requirement for both designs is FabricPath. I have FabricPath enabled on all ports connecting leaf to spine. Here is my topology for vPC+. For most of you out there wondering why we need vPC+ with FabricPath: we need it for active/active. Without it we would only have HSRP forwarding on one spine switch.

VPC+This

HSRP simply runs between the spine switches; I will use VLAN 2 as the example.

Spine 1

interface Vlan2
no shutdown
no ip redirects
ip address 10.0.2.1/24
no ipv6 redirects
ip router eigrp 2
ip pim sparse-mode
hsrp version 2
hsrp 2
preempt
priority 110
ip 10.0.2.254

Spine 2
interface Vlan2
no shutdown
no ip redirects
ip address 10.0.2.2/24
no ipv6 redirects
ip router eigrp 2
ip pim sparse-mode
hsrp version 2
hsrp 2
preempt
priority 110
ip 10.0.2.254

vpc domain 1
peer-switch
role priority 100
system-priority 100
peer-keepalive destination 192.168.1.1 source 192.168.1.2 vrf vpcka
delay restore 25
peer-gateway
auto-recovery
delay restore interface-vlan 1
fabricpath switch-id 10
ip arp synchronize

IP ARP Table
Total number of entries: 1
Address Age MAC Address Interface
10.0.2.254 – 0000.0c9f.f002 Vlan2

Now let's check one of the leaf switches in the diagram to see how it would get to the MAC address 0000.0c9f.f002.

Keep in mind ports 1/29 and 1/30 are connected to each spine switch.

VPC+forwarding

In the show commands you can see that traffic going to the HSRP MAC will be forwarded over both links through the emulated switch-id. Without the emulated switch-id, traffic would simply forward over one link and one link only. So the emulated switch-id is sort of a hack on FabricPath and vPC that allows this type of behavior.
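The leaf-side lookups in the screenshot can be reproduced with something like the following (a sketch; exact output formatting varies by NX-OS release):

show mac address-table address 0000.0c9f.f002
show fabricpath route switchid 10

The MAC entry points at the emulated switch-id, and the FabricPath route for that switch-id lists both uplinks (1/29 and 1/30) as equal-cost next hops.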

Alrighty, now that the easy part is over, let's take a look at our new topology!

HSRPanycast

In this example we will use VLAN 4 on the 10.0.4.0/24 subnet, with an HSRP address of 10.0.4.254. The HSRP config is the same as VLAN 2; however, there are some additional configuration items beyond the normal HSRP config. This needs to be HSRP version 2; version 1 will not work.

interface Vlan4
no shutdown
no ip redirects
ip address 10.0.4.2/24
no ipv6 redirects
ip router eigrp 2
hsrp version 2
hsrp 4
ip 10.0.4.254
!
hsrp anycast 4 both
switch-id 40
vlan 4
no shutdown
!
The HSRP MAC in this situation is 0000.0c9f.f004. Let's see how the same leaf switch would forward traffic to that MAC if a server below it had to talk with its default gateway.

anycastroute

We can see the path can take the previous spine switches on ports 29 and 30, and also the new one that was added on port 17.

Some takeaways for Anycast HSRP:

-Needs HSRP version 2.
-This is implemented on the spines, where the L3 boundary should be.
-Needs version 6.2(6) on a 7K and at least version 7.0 on a 6K.
-I am unsure if this will work on a 55xx with an L3 module.

ASR9k with Nv Clustering Part 2.

This is a continuation of ASR9k nV Part 1, where we looked at the general theory and purpose of ASR9k nV clustering. The configuration is very simple, but note that it is disruptive.

*Before adding configuration, make sure to license each device for nV, with one license for rack 1 and the other for rack 0.*

As stated before, on each device the IRLs have to be 10G for the data plane. They can be either bundled links or single links. The control links have to be Gig interfaces.

interface TenGigE1/0/0/0
nv
edge
interface
!
interface TenGigE1/0/1/0
nv
edge
interface

At this point, connect the chassis to each other via the RSPs and via the TenGig IRLs. Rack 1 should reload; rack 0 will stay up. As soon as rack 1 reloads, its interfaces should appear on rack 0, creating an nV cluster.
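To verify the cluster formed, I check the designated shelf controller state from admin mode; I believe the command is:

RP/0/RSP0/CPU0:ASR9k# admin show dsc

Both racks (0 and 1) should show up, one acting as the primary DSC and the other as backup.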

ASR9k with Nv Clustering Part 1.

Part 1 will explain the general purpose of nV; Part 2 will explain the configuration. The ASR9k comes with a new feature as of 4.2.1 called nV, which stands for Network Virtualization. This allows two ASR9ks to appear as one logical router. The technology can be thought of as a cross-stack EtherChannel, as it has the same general concept as others, i.e. VPC, VSS, 3750 stacking, etc. The biggest difference with nV is that it separates the control-plane and the data-plane traffic, whereas in other cases like VPC it was highly recommended not to send traffic through the peer link, and with VSS the data- and control-plane mechanisms were on the same 10Gig interfaces. There are some requirements for running nV:

1.) Has to be an RSP440 CPU; you need at least one per router.
2.) Needs newer 10G line cards, either Thor or Trident Enhanced.
3.) The RSP EOBC links need to be 1G only; the only SFP I was able to get working was a GLC-SX-MMD, as the RSPs are very sensitive.
4.) Needs 4.2.1 or above. At the time of writing, 4.3.2 is the latest release and has some nV fixes.

The cabling is simple. The general thought is that we are bridging our RSP440s together for control-plane traffic, and for data-plane traffic we are bridging two 10Gig line cards from each router. Each RSP has two ports for management, and each RSP needs at least one connection to the RSP in the other chassis on the same subnet. Here is a cabling diagram.

ASRcabling

The thought is simple: each RSP is connected to the other.

Now, the interesting part about nV is that one RSP is always primary and one RSP is always backup, and they live on two different routers. For example, if RSP0 is the active RSP on rack 0, the backup will be on rack 1.

The general idea of the data plane is simple. If a packet lands on rack 0 but is destined for rack 1, it will use the data-plane links between the 9ks. An ASR9k can use either equal-cost load balancing at Layer 3, or a downstream device can use the same LACP port hashing we know and love. So it is possible to have a packet land on rack 0 destined for rack 1 and use the 10Gig links for what they are designed for. Here is an example.

Flowbased

In this example a packet enters the nV pair as if it were one router. The packet, destined for 10.0.0.0/24, lands on the router on the left but needs to make it to the router on the right, so it is routed over the IRL (the 10Gig link) between the 9ks.

In Part two I will go over the Nv configuration.

OSPF conditional routing

OSPF conditional routing advertises a default route only while a particular route exists on the router originating that default. Normally, conditional routing is used to advertise a default route depending on whether the originating router's interface facing the service provider is up or not.

Our topology.

OSPFDEFAULT2

Our topology is very simple. Every router is running OSPF except for the top two, which simply inject a BGP default route. The CE routers then have the BGP default route in their RIB and pass the default route into OSPF. So here is our configuration on the CE routers.

R7

r7#sh ip route 0.0.0.0
Routing entry for 0.0.0.0/0, supernet
Known via “bgp 100”, distance 20, metric 0, candidate default path
Tag 2, type external
Last update from 11.11.11.11 00:25:13 ago
Routing Descriptor Blocks:
* 11.11.11.11, from 11.11.11.11, 00:25:13 ago
Route metric is 0, traffic share count is 1
AS Hops 1
Route tag 2

router bgp 100
no synchronization
bgp log-neighbor-changes
neighbor 11.11.11.11 remote-as 2
neighbor 11.11.11.11 ebgp-multihop 5
neighbor 11.11.11.11 update-source Loopback0
no auto-summary

router ospf 1
log-adjacency-changes
network 7.7.7.7 0.0.0.0 area 0
network 27.27.27.0 0.0.0.255 area 0
default-information originate

R6

R6#sh ip route 0.0.0.0
Routing entry for 0.0.0.0/0, supernet
Known via “bgp 100”, distance 20, metric 0, candidate default path
Tag 2, type external
Last update from 12.12.13.12 00:41:36 ago
Routing Descriptor Blocks:
* 12.12.13.12, from 12.12.13.12, 00:41:36 ago
Route metric is 0, traffic share count is 1
AS Hops 1
Route tag 2

router bgp 100
no synchronization
bgp log-neighbor-changes
neighbor 12.12.13.12 remote-as 2
neighbor 12.12.13.12 ebgp-multihop 5
neighbor 12.12.13.12 update-source Loopback0
no auto-summary

router ospf 1
log-adjacency-changes
network 6.6.6.6 0.0.0.0 area 0
network 36.36.36.0 0.0.0.255 area 0
default-information originate

Okay, now that we have that out of the way, let's check out S2 and see what our default route looks like.

S2#sh ip route 0.0.0.0
Routing entry for 0.0.0.0/0, supernet
Known via “ospf 1”, distance 110, metric 1, candidate default path
Tag 1, type extern 2, forward metric 65
Last update from 32.32.32.3 on Vlan32, 00:00:36 ago
Routing Descriptor Blocks:
32.32.32.3, from 6.6.6.6, 00:00:36 ago, via Vlan32
Route metric is 1, traffic share count is 1
Route tag 1
* 26.26.26.2, from 7.7.7.7, 00:00:36 ago, via Vlan26
Route metric is 1, traffic share count is 1
Route tag 1

Uh oh, looks like we have two default routes. Let's check an upstream router.

R3#sh ip route 0.0.0.0
Routing entry for 0.0.0.0/0, supernet
Known via “ospf 1”, distance 110, metric 1, candidate default path
Tag 1, type extern 2, forward metric 64
Last update from 36.36.36.6 on Serial0/0, 00:01:32 ago
Routing Descriptor Blocks:
* 36.36.36.6, from 6.6.6.6, 00:01:32 ago, via Serial0/0
Route metric is 1, traffic share count is 1
Route tag 1

It's going to load-balance the default route because the metrics are equal; you can see that in the drawing. I really do not want that, due to possible asymmetric routing issues. The most simplistic way to get rid of this is to set the cost on one of the interfaces facing upstream toward the CE routers.

R3(config)#int s0/0

R3(config-if)#ip ospf cost 1000

This will give me one route on S2.

Okay, so now the good part. I can simply match the locally connected interface's subnet in a prefix-list and tie it in with a route-map, then use it in OSPF on my default-information originate statement, so that if that interface goes down traffic will traverse the other router. For example:

R7 is Primary its primary interface is Se1/0 71.71.71.0/24

R6 is Secondary due to OSPF cost its interface is Se1/0 62.62.62.0/24

So we will create a prefix-list on both routers. I will simply show R7 to begin with.

ip prefix-list 71 seq 5 permit 71.71.71.0/24

route-map default permit 10
match ip address prefix-list 71
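R6 is the same idea, just matching its own upstream subnet (62.62.62.0/24):

ip prefix-list 62 seq 5 permit 62.62.62.0/24

route-map default permit 10
 match ip address prefix-list 62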

Now, before I add anything into OSPF, let's check S2 to see where our default route is.

Routing entry for 0.0.0.0/0, supernet
Known via “ospf 1”, distance 110, metric 1, candidate default path
Tag 1, type extern 2, forward metric 65
Last update from 26.26.26.2 on Vlan26, 00:04:58 ago
Routing Descriptor Blocks:
* 26.26.26.2, from 7.7.7.7, 00:04:58 ago, via Vlan26
Route metric is 1, traffic share count is 1
Route tag 1

Great, we take the path to R7. Due to our cost setting we will always take that path. Now let's go ahead and tie a route-map in on R6 and R7 with our matching prefix-lists.
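The tie-in itself is the route-map option on default-information originate; on R7 that looks like:

router ospf 1
 default-information originate route-map default

With this in place, the default route is only originated while a route matching the route-map exists in the RIB.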

Our default route should still stay the same on S2. Now let's go ahead and shut down interface Se1/0 so that there is no RIB match for the subnet 71.71.71.0/24. If everything works right, OSPF will stop advertising the default route originating from R7 and start originating it from R6.

r7(config)#int se1/0
r7(config-if)#shut

Now, going to S2, if everything worked we should see a default route heading out toward R6.

S2#sh ip route 0.0.0.0
Routing entry for 0.0.0.0/0, supernet
Known via “ospf 1”, distance 110, metric 1000, candidate default path
Tag 1, type extern 2, forward metric 1001
Last update from 32.32.32.3 on Vlan32, 00:00:20 ago
Routing Descriptor Blocks:
* 32.32.32.3, from 6.6.6.6, 00:00:20 ago, via Vlan32
Route metric is 1000, traffic share count is 1
Route tag 1

This is a nice feature and one of the most simplistic ways to do conditional routing. It can also be tied into an IP SLA and other creative routing techniques.

How to create Cisco ACE virtual contexts.

A virtual context within a Cisco ACE module is similar to what a virtual machine is in VMware or what a VDC is within Nexus. Virtual contexts are nice for all aspects of load balancing, since they give a customer or department logical separation for a variety of reasons. I am one who likes the ACE appliances and ACE blades. This walkthrough is for a 4710 appliance. A blade is very similar; instead of trunking over a port channel to the appliance, one would simply create the service VLAN groups in the running config of a 6500. Here is our very simple diagram.

ACE1

I will use VLAN 5 for management; every context will simply receive a management IP via VLAN 5. VLANs 10, 20, and 30 will be production (load-balancing) VLANs. VLANs 100, 200, and 300 will be set up as what are called fault-tolerance (FT) VLANs. These VLANs work within a context to sync the running config back and forth between the ACE devices. They do not have to be routable, so you can simply pick any Layer 3 subnet that runs between your switches and ACEs. What is extremely nice about the ACEs is that one can completely tank and the other will take every session without skipping a beat. You can also have the ACE track who the HSRP primary is for a VLAN so that it is the primary for that context, and yes, you can mix and match contexts: you can have VLAN 10 be primary on the ACE on the left and VLAN 20 primary on the ACE on the right. I like the ACE devices; it's a shame that they are going EOS soon.

So the first thing you will want to do is trunk your VLANs over from your switch via the port channel, on let's say a 6500. This has to be done on both switches. Obviously, each VLAN has to be allowed on the port channel between the 6500s as well.

interface Port-channel1
switchport
switchport trunk encapsulation dot1q
switchport trunk allowed vlan 5,10,20,30,100,200,300,500
switchport mode trunk
no ip address
mls qos trust dscp
end


On Each ACE.

interface gigabitEthernet 1/1
channel-group 1
no shutdown
interface gigabitEthernet 1/2
channel-group 1
no shutdown
interface gigabitEthernet 1/3
channel-group 1
no shutdown
interface gigabitEthernet 1/4
channel-group 1
no shutdown

There it is; all of our VLANs are now trunked over. Next, when you first install your ACE you will be dropped into the Admin context. This is where all the magic happens and where all the new contexts are created. Now keep in mind that everything is blocked by default, since the ACE shares a similar platform to the FWSM firewall. So on your management VLAN you will have to tie in a class map and policy map to allow management traffic: telnet, SSH, ICMP, etc.

class-map type management match-any REMOTE_ACCESS_CLASS
2 match protocol icmp any
3 match protocol telnet any
4 match protocol ssh any
5 match protocol snmp any
policy-map type management first-match MGMT-POLICY
class REMOTE_ACCESS_CLASS
permit

interface vlan 5
description MANAGEMENT-VLAN
ip address 10.0.0.1 255.255.255.0
peer ip address 10.0.0.2 255.255.255.0 ( This is needed for FT)
service-policy input MGMT-POLICY
no shutdown

Now to the contexts! I am going to simply create a context for VLAN 10.

context VLAN10
 allocate-interface vlan 5
 allocate-interface vlan 10
 allocate-interface vlan 100

Now I should be able to see my contexts and switch to VLAN10.

TESTACE/Admin# changeto ?
Admin
VLAN10

Next, the FT interface for my management VLAN in the Admin context:

ft interface vlan 500
ip address 192.168.1.1 255.255.255.0
peer ip address 192.168.1.2 255.255.255.0
no shutdown

Next, the FT group for my VLAN10 context:

ft group 2
peer 1
priority 150
peer priority 110
associate-context VLAN10
inservice

Now if I wanted to create FT group 3 for VLAN 20, I could mix and match the priorities, making the other ACE the primary. After doing all the legwork on the primary ACE, once I put in the FT interface and trunk all my VLANs over to the secondary ACE, I should get the exact same configuration, as well as the contexts, on the other 4710.
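As a sketch, FT group 3 for a hypothetical VLAN20 context with the priorities flipped (so the other ACE is primary for it) would look like:

ft group 3
 peer 1
 priority 110
 peer priority 150
 associate-context VLAN20
 inservice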