Re: One internal network, VPN, 2 PIX

In article <JZKdnax9g9aqBIzZRVn-qg@xxxxxxxxxxxx>,
Mike W. <mike@xxxxxxxx> wrote:
> I have an existing network (let's call it Corporate), behind a PIX 506. The
> internal IP addressing is
>
> Here's what I want to do:
>
> Add a second PIX (501) to handle VPN client sessions to the "Corp" network.
> This is all that this device will do.

Is there a particular reason to use a separate device instead of
putting the VPN onto the 506 itself? For example, for policy reasons,
or because the 506 is the "production" firewall and you can't clear
enough testing time? Or is there a need for the VPN clients to be able
to access resources "outside" the 506 by connecting to the 501?

> How should I go about setting this up? Do I assign it an internal address
> in the same subnet? I've tried that and I'm able to connect remotely, but
> all I can ping is the internal interface on the PIX that I'm VPN'ing in to.
> Do I need to add ACLs into the Corp PIX to allow the VPN traffic? (I already
> have them in the 501 VPN PIX.)

If you are able to connect far enough to ping the inside interface
of the PIX 501, then the 506 already has enough pinholes to allow the
VPN traffic you want.
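For reference, the pinholes on the exterior PIX would look something like the following sketch (PIX 6.x syntax; the ACL name and the 501's reachable IP, 203.0.113.5, are hypothetical placeholders for your own addressing):

```
! Permit IKE negotiation and ESP from any VPN client to the 501
access-list outside_in permit udp any host 203.0.113.5 eq isakmp
access-list outside_in permit esp any host 203.0.113.5
! NAT-Traversal (UDP 4500), if clients sit behind NAT devices
access-list outside_in permit udp any host 203.0.113.5 eq 4500
access-group outside_in in interface outside
```

Since you can already ping the 501's inside interface over the tunnel, equivalent holes evidently exist on your 506 already.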

> I've got the clients being assigned IP addresses in the range,
> and split tunneling IS working.

> Do I have to add a "route inside" statement, or something like that?

The clue that I see is that you are able to ping the PIX 501 *inside*
interface through the VPN. You can normally only ping the "closest"
PIX interface, which would be the "outside" interface. The only exception
to this rule is if you apply a crypto map to the inside interface
and mark the inside as the "management interface" -- but if you do
that, then that tunnel cannot be used to access anything other than the
PIX itself (i.e., to "manage" the PIX.)

But to point out the obvious on the off-chance that you happened
to overlook it:

When the VPN clients with their 10.99.1.* IPs attempt to contact
the 192.168.200.* inside hosts, the inside hosts are going to
reply, and the 10.99.1.* packets are going to be routed to the
next hop that those inside hosts know about. Unless you reconfigured
those hosts, or went to a bunch of trouble with OSPF, that next
hop would be the 506. The 506 doesn't know anything special about
10.99.1.* destinations, so it will send the packets out to the Internet
(e.g., UDP), block them (if you have the standard denies for RFC 1918
private addresses), or drop them (TCP replies would arrive as unexpected
SYN/ACKs).

In order to avoid this and yet not have to reconfigure your 506,
you have to put the 501 outside of the 506 and open
appropriate holes for the decapsulated packets from the 501;
or you need to have the 501 NAT the *source* 10.99.1.* IPs into
*source* 192.168.200.* IPs (and let the 501 proxy-arp those IPs
so the interior hosts send responses to the 501); or you need to
add an interior router and use that as the gateway for your
internal hosts.
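The second option -- having the 501 translate the VPN clients' *source* addresses -- might look roughly like this in PIX 6.2/6.3 syntax, which added outside NAT (the pool boundaries here are made up; pick addresses unused on your inside subnet):

```
! Translate VPN-client sources (10.99.1.*) arriving on the outside interface
nat (outside) 1 10.99.1.0 255.255.255.0 outside
! ...into unused addresses on the inside subnet; the 501 proxy-arps
! for these, so interior hosts reply directly to the 501
global (inside) 1 192.168.200.200-192.168.200.220
```

The trailing "outside" keyword on the nat statement is what enables translation of traffic arriving on the lower-security interface.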

For incoming packets, the PIX 501 normally de-NATs *destinations*,
leaving the source IPs alone -- this is the general rule when
going from a lower security interface to a higher security interface.
You can configure it to NAT incoming *source* IPs, but the
configuration gets a bit ugly.

What I did in a similar situation was to connect up the interior PIX
*backwards*. The "inside" interface is the one I applied the
crypto map to, and which receives the VPN traffic; it was assigned
a public IP on a small subnet fragment, and the exterior PIX was set to
allow all IPSec traffic through to that public IP. The "outside"
interface of the interior PIX was then put into an IP range
that the main LAN could reach without going through the exterior
firewall; in my case, it was another of our subnets (we have an
interior router), but in your case it would be 192.168.200.* .
Appropriate nat and static commands were set up.
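A sketch of that "backwards" arrangement in PIX 6.x syntax (all addresses hypothetical: 203.0.113.8/29 standing in for the small public fragment, 192.168.200.0/24 for the main LAN):

```
! "Inside" interface carries the public IP and terminates the tunnels
ip address inside 203.0.113.10 255.255.255.248
crypto map vpnmap interface inside
isakmp enable inside
! "Outside" interface sits directly on the interior LAN
ip address outside 192.168.200.2 255.255.255.0
! Decapsulated client traffic (10.99.1.*) heading toward the
! lower-security outside interface gets its source translated
! into the LAN range via standard nat/global logic
nat (inside) 1 10.99.1.0 255.255.255.0
global (outside) 1 192.168.200.100-192.168.200.150
```

Because the source translation happens in the ordinary higher-to-lower-security direction, no outside-NAT configuration is needed at all.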

The VPN traffic comes in to the exterior firewall, which passes it
through unchanged to the interior PIX *inside* interface; an interior
router really helps get it there, but you could use a "logical
interface" (802.1Q VLAN) on the 506 to the same effect. The interior
PIX decapsulates the traffic, and then since it sees that the traffic
is destined for a lower-security interface (the "outside" of the
interior PIX), it uses standard nat/static logic to change the -source-
IP into the outside IP range. The decapsulated packets that reach the
interior hosts have the altered source IPs.

When the interior hosts reply, they do so to the altered IPs, which you
have arranged to be in the same IP range as those interior hosts, so
the interior hosts arp for the destination directly instead of sending
the packet off to the gateway. The interior PIX proxy-arps those IPs on
its outside interface, so it receives the replies on its outside
interface. The interior PIX sees that it has an active translation for
that dataflow, and it sees the packet as being one going from a lower
security interface to a higher security interface, so it de-NATs
the -destination- IP... changing it back to the IP of the VPN
client. The PIX then notices that there is an active VPN for
the client IP, so it encapsulates the traffic and sends it out
to the appropriate peer address, using the interior PIX inside
interface to do so. The exterior PIX sees these encapsulated
packets, and sees that the destination is outside the exterior
PIX, so it lets the encapsulated packets head out towards the
appropriate peer public IP.

Getting this to work can require a "route inside" statement or three --
if you miss those, then you get the symptoms that Phase 1 negotiations
complete, but Phase 2 negotiations generate "no route to host"
log messages.
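On the interior PIX that typically means routing the VPN peers' public IPs out the *inside* (public-facing) interface, e.g. (next-hop address hypothetical):

```
! Reach the VPN peers' public IPs via the inside interface
route inside 0.0.0.0 0.0.0.0 203.0.113.9 1
```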

This arrangement might seem a bit strange, but it results
in a much cleaner interior PIX configuration than you would
have if you tried to do NAT'ing of the *source* IPs on packets
travelling towards the inside interface.