Recently I began planning a centralized deployment of OCS that would sit in one of six offices for a client we’ll call Acme Accounts. Acme will start out using OCS for IM, P2P audio/video, and Live Meeting; eventually OCS will grow into a full-blown enterprise voice solution. The first big hurdle in designing Acme’s OCS implementation was their “WAN”: instead of typical MPLS or point-to-point connections, Acme uses its internet connections and the ASA firewalls in each location to create site-to-site VPNs that route traffic between sites. The latency introduced by this setup has never been an issue, but voice and video have never run over these connections either. Here’s a look at a diagram of the environment:
To make things a little more complicated, the internal domain name (acmeaccounts.com) matches the external name, and DNS in every site is Active Directory integrated (ADI), as it should be. With ADI zones, all DNS servers share the same zone data, so we can’t publish different records in each site that point OCS clients at public IPs instead of private ones to bypass the VPN.
Since the VPN between sites introduces extra latency, we wanted OCS traffic to route straight over the internet rather than through the site-to-site tunnels. This means users in each office get the same experience as users on the internet. Not a perfect scenario, but with 10 to 20 Mb connections in each location and not a ton of usage, it’s the best solution until a real WAN is put in place.
On to the details… Since the edge server sits out in a DMZ with a 100 Mb connection to the HQ ASA (but isn’t behind it), I felt comfortable letting the HQ users connect through the firewall to the edge, instead of straight to the front end. The big benefit is that the other five sites (most with as many or more users than HQ) can now use the public IPs to connect and keep their traffic out of the VPN.
Here’s a look at the original DNS records:
Pool FQDN = Pool.acmeaccounts.com > 192.168.1.100
Autoconfig-SRV = _sipinternaltls._tcp.acmeaccounts.com > Pool.acmeaccounts.com
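For reference, this is the lookup Communicator performs at sign-in: it queries the autoconfiguration SRV record, then resolves whatever host the SRV hands back. A minimal sketch using the third-party dnspython library (the names match the fictionalized examples above, so this won’t resolve anywhere real; it’s just to show the two-step discovery):

```python
import dns.resolver  # third-party: pip install dnspython

# Step 1: the client queries the SRV record for its SIP domain.
srv = dns.resolver.resolve("_sipinternaltls._tcp.acmeaccounts.com", "SRV")
target = srv[0].target.to_text().rstrip(".")  # e.g. Pool.acmeaccounts.com
port = srv[0].port                            # e.g. 5061 for TLS

# Step 2: it resolves that target to an IP and connects there.
a = dns.resolver.resolve(target, "A")
print(f"Sign-in target: {target}:{port} -> {a[0].address}")
```

With the original records, that second lookup lands on 192.168.1.100, which is exactly what sends branch-office traffic into the VPN tunnels.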
Here’s a look at how I configured the DNS records to send all the traffic to public IPs instead of private:
Audio/Video = av.acmeaccounts.com > 18.104.22.168
Access Edge = sip.acmeaccounts.com > 22.214.171.124
Meeting = meet.acmeaccounts.com > 126.96.36.199
Public Farm FQDN = abs.acmeaccounts.com > 188.8.131.52
Autoconfig-SRV = _sip._tls.acmeaccounts.com > sip.acmeaccounts.com
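To sanity-check a change like this, a quick script can confirm each record now returns a public address and that the SRV hands clients the edge rather than the pool. A rough sketch, again using dnspython; the hostnames match the list above, but the check itself is my own addition, not part of the original rollout:

```python
import dns.resolver  # third-party: pip install dnspython
import ipaddress

# Records that should now resolve to public (non-RFC 1918) addresses.
records = ["av.acmeaccounts.com", "sip.acmeaccounts.com",
           "meet.acmeaccounts.com", "abs.acmeaccounts.com"]

for name in records:
    for rr in dns.resolver.resolve(name, "A"):
        addr = ipaddress.ip_address(rr.address)
        status = "PRIVATE - still routing over the VPN!" if addr.is_private else "public"
        print(f"{name} -> {rr.address} ({status})")

# Confirm the autoconfig SRV now points at the Access Edge.
srv = dns.resolver.resolve("_sip._tls.acmeaccounts.com", "SRV")
print(f"SRV -> {srv[0].target.to_text().rstrip('.')}:{srv[0].port}")
```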
Here’s a diagram of what the environment looks like with OCS; now the clients at HQ actually go out the main firewall and over to the edge to reach the Front End server (follow the arrows from the MOC symbol to the OCSEdge):
Although it’s not a perfect solution, it gives clients better performance for Live Meetings and conferences. Peer-to-peer traffic may still make its way over the VPNs depending on the connectivity checks, but in our testing it usually didn’t. Once the WAN is in place and we’re ready to start rolling out voice, we’ll shift the records back to pointing straight at the front end server.