All questions and answers:
Firewall
Tell me about all the TCP flags you can think of.
SYN - initiates the connection.
ACK - acknowledges received data.
FIN - closes the connection gracefully.
RST - aborts a connection in response to an error.
PSH -
Buffers allow
for more efficient transfer of data when sending more than one maximum segment
size (MSS) worth of data (for example, transferring a large file). However,
large buffers do more harm than good when dealing with real-time applications
which require that data be transmitted as quickly as possible. Consider what
would happen to a Telnet session, for instance, if TCP waited until there was
enough data to fill a packet before it would send one: You would have to type
over a thousand characters before the first packet would make it to the remote
device. Not very useful.
This is
where the PSH flag comes in. The socket that TCP makes available at the session
level can be written to by the application with the option of
"pushing" data out immediately, rather than waiting for additional
data to enter the buffer. When this happens, the PSH flag in the outgoing TCP
packet is set to 1 (on). Upon receiving a packet with the PSH flag set, the
other side of the connection knows to immediately forward the segment up to the
application. To summarize, TCP's push capability accomplishes two things:
• The sending application informs TCP that data should be sent immediately.
• The PSH flag in the TCP header informs the receiving host that the data should be pushed up to the receiving application immediately.
URG-
The URG flag
is used to inform a receiving station that certain data within a segment is
urgent and should be prioritized. If the URG flag is set, the receiving station
evaluates the urgent pointer, a 16-bit field in the TCP header. This pointer
indicates how much of the data in the segment, counting from the first byte, is
urgent.
0. What are stateful inspection and packet filtering? What's the difference?
A stateful firewall is aware of the connections that pass
through it.
Packet firewalls, on the other hand, don’t look at the
state of connections, but just at the
packets themselves.
A good example of a packet filtering firewall is the
extended access control lists
(ACLs) that Cisco IOS routers use. With these ACLs, the
router will look only at the following
information in each individual packet:
Source IP address
Destination IP address
IP protocol
IP protocol information, like TCP/UDP port numbers or ICMP
message types
At first glance, because the information is the same that
a stateful firewall examines,
it looks like a packet filtering firewall performs the
same functions as a stateful firewall.
However, a Cisco IOS router using ACLs doesn’t look at
whether this is a connection
setup request, an existing connection, or a connection
teardown request—it just filters
individual packets as they flow through the interface.
Some people might argue that the established keyword with
Cisco’s extended
ACLs implements the stateful function found in a stateful
firewall; however, this keyword
only looks for certain TCP flags like FIN, ACK, RST, and
others in the TCP segment
headers and allows them through. Again, the router is not
looking at the state of the connection
itself when using extended ACLs, just information found
inside the layer 3 and
layer 4 headers.
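As a rough illustration of the established keyword (the addressing, ACL number, and interface below are made up for the example), the router only checks for TCP flags in the returning packets and keeps no table of which inside host actually opened the session:

access-list 101 permit tcp any 10.1.1.0 0.0.0.255 established
access-list 101 deny ip any any log
interface GigabitEthernet0/0
 ip access-group 101 in

Any outside host that crafts a packet with the ACK or RST bit set would be permitted by the first line, which is exactly the limitation described above.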
1.What is Adaptive security
algorithm?
Adaptive
Security Algorithm (ASA) is a Cisco algorithm that manages traffic flow through
PIX firewalls. It inspects packets and creates remembered entries which are
then referenced when traffic attempts to flow from lower- to higher-security
areas. Only packets that match entries are allowed through.
How do stateful inspection and application intelligence work in the Security Appliance? Conceptually, three basic operational functions are performed:
• Access lists: controlling network access based on specific networks, hosts, and services (TCP/UDP port numbers) - see the small example after this list.
• Connections (xlate and conn tables): maintaining state information for each connection. This information is used by the Adaptive Security Algorithm and cut-through proxy to effectively forward traffic within established connections.
• Inspection engine: performing stateful inspection coupled with application-level inspection functions. These inspection rule sets are predefined to validate application compliance with RFCs and other standards and cannot be altered.
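A minimal sketch of the first function on an ASA, assuming an interface named outside and a web server at 192.168.1.2 (both illustrative):

access-list OUTSIDE_IN extended permit tcp any host 192.168.1.2 eq 80
access-group OUTSIDE_IN in interface outside

The xlate and conn tables from the second function can then be viewed with show xlate and show conn; sample output appears later in this document.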
2.How would the firewall treat a
TCP and UDP packets when it crosses the firewall ?
For TCP traffic, all incoming TCP connections are inspected by the Cisco ASA/PIX firewall to make sure that the 3-way handshake completes per the TCP mechanism. The firewall will drop any incomplete TCP transaction to protect against possible TCP-based attacks.
As an example, the firewall tracks each TCP session as part of the 3-way-handshake protection mechanism using a hold timer. The firewall expects to receive the server's responses within the hold-timer interval, after which the timer expires. If the firewall does not receive the server response before the timer expires, it drops the related TCP session and also drops any "late" server response.
Another example is the firewall dropping TCP packets when a TCP client keeps resending TCP synchronization (SYN) packets, or sends a TCP acknowledgment (ACK) packet without sending a TCP SYN packet first. In these situations, the firewall drops the TCP SYN or TCP ACK accordingly.
There is also a TCP Initial Sequence Number (ISN) randomization feature, enabled by default, which randomizes the TCP sequence numbers exchanged between client and server in order to provide protection against TCP sequence prediction attacks.
One optional feature is setting the maximum number of simultaneous TCP and UDP connections through the firewall for an entire subnet. The default is 0, which means unlimited connections; the firewall lets the server determine the number.
Another optional feature is
specifying the maximum number of embryonic connections per host. An embryonic
connection is a connection request that has not finished the necessary
handshake between source and destination. Set a small value for slower systems,
and a higher value for faster systems. The default is 0, which means unlimited
embryonic connections.
The embryonic connection limit lets you prevent a type of attack
where processes are started without being completed. When the embryonic limit
is surpassed, the TCP intercept feature intercepts TCP SYN packets from clients
to servers on a higher security level. The software establishes a connection
with the client on behalf of the destination server, and if successful,
establishes the connection with the server on behalf of the client and combines
the two half-connections together transparently. Thus, connection attempts from
unreachable hosts never reach the server. The PIX firewall and ASA accomplish
TCP intercept functionality using SYN cookies.
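A hedged sketch of how these limits are typically set on newer ASA code with the Modular Policy Framework (ACL, class/policy names, addresses, and limit values are illustrative; older PIX/ASA releases set the same limits directly on the static/nat commands):

access-list WEB-SRV extended permit tcp any host 192.168.1.10 eq 80
class-map WEB-SRV-CLASS
 match access-list WEB-SRV
policy-map OUTSIDE-POLICY
 class WEB-SRV-CLASS
  set connection conn-max 1000 embryonic-conn-max 100
service-policy OUTSIDE-POLICY interface outside

Once the embryonic count for that server exceeds 100, TCP intercept (SYN cookies) takes over the handshake as described above.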
For UDP, basically each time the ASA receives a new connection it records the source IP, source port, destination IP, and destination port. That information is held in the stateful table of the ASA, and a reply to that packet is expected within a specific amount of time (the UDP idle timeout).
3.Tell me about the different
types of Nat?
1) Dynamic NAT
Translates a
group of real addresses to a pool of mapped addresses that are routable on the
destination network. The mapped pool may include fewer addresses than the real
group. When a host you want to translate accesses the destination network, the
security appliance assigns the host an IP address from the mapped pool. The
translation is added only when the real host initiates the connection. The
translation is in place only for the duration of the connection, and a given
user does not keep the same IP address after the translation times out.
2) PAT
PAT translates
multiple real addresses to a single mapped IP address. Specifically, the
security appliance translates the real address and source port (real socket) to
the mapped address and a unique port above 1024 (mapped socket). Each
connection requires a separate translation, because the source port differs for
each connection. For example, 10.1.1.1:1025 requires a separate translation
from 10.1.1.1:1026. PAT lets you use a single mapped address, thus conserving
routable addresses.
3) Static NAT
Static NAT
creates a fixed translation of real address(es) to mapped address(es). With
dynamic NAT and PAT, each host uses a different address or port for each
subsequent translation. Because the mapped address is the same for each
consecutive connection with static NAT, and a persistent translation rule
exists, static NAT allows hosts on the destination network to initiate traffic
to a translated host (if an access list exists that allows it).
The main
difference between dynamic NAT and a range of addresses for static NAT is that
static NAT allows a remote host to initiate a connection to a translated
host (if an access list exists that allows it), while dynamic NAT does not. You
also need an equal number of mapped addresses as real addresses with
static NAT.
4) Static PAT
Static PAT
is the same as static NAT, except that it lets you specify the protocol (TCP or
UDP) and port for the real and mapped addresses.
This feature
lets you identify the same mapped address across many different static
statements, provided the port is different for each statement. You cannot use
the same mapped address for multiple static NAT statements.
For
applications that require inspection for secondary channels (for example, FTP
and VoIP), the security appliance automatically translates the secondary ports.
5) Bypassing
NAT
You can
configure traffic to bypass NAT using one of three methods. All methods achieve
compatibility with inspection engines. However, each method offers slightly
different capabilities, as follows:
•Identity NAT (nat 0 command)—When
you configure identity NAT (which is similar to dynamic NAT), you do not limit
translation for a host on specific interfaces; you must use identity NAT for
connections through all interfaces. Therefore, you cannot choose to perform normal
translation on real addresses when you access interface A, but use identity NAT
when accessing interface B. Regular dynamic NAT, on the other hand, lets you
specify a particular interface on which to translate the addresses. Make sure
that the real addresses for which you use identity NAT are routable on all
networks that are available according to your access lists.
For identity NAT, even though the
mapped address is the same as the real address, you cannot initiate a connection from the
outside to the inside (even if the interface access list allows it). Use static
identity NAT or NAT exemption for this functionality.
•Static identity NAT (static
command)—Static identity NAT lets you specify the interface on which
you want to allow the real addresses to appear, so you can use identity NAT
when you access interface A, and use regular translation when you access
interface B. Static identity NAT also lets you use policy NAT, which identifies
the real and destination addresses when determining the real addresses to
translate (see the "Policy NAT" section for more information
about policy NAT). For example, you can use static identity NAT for an inside
address when it accesses the outside interface and the destination is server A,
but use a normal translation when accessing the outside server B.
•NAT exemption (nat 0
access-list command)—NAT exemption allows both translated and
remote hosts to initiate connections. Like identity NAT, you do not limit
translation for a host on specific interfaces; you must use NAT exemption for
connections through all interfaces. However, NAT exemption does let you
specify the real and destination addresses when determining the real addresses
to translate (similar to policy NAT), so you have greater control using NAT exemption.
However unlike policy NAT, NAT exemption does not consider the ports in
the access list. NAT exemption also does not support connection settings,
such as maximum TCP connections.
6) Policy
NAT
Policy NAT
lets you identify real addresses for address translation by specifying the
source and destination addresses in an extended access list. You can also
optionally specify the source and destination ports. Regular NAT can only
consider the source addresses, and not the destination. For example, with policy
NAT, you can translate the real address to mapped address A when it accesses
server A, but translate the real address to mapped address B when it accesses
server B.
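To tie the NAT types above to configuration, here is a hedged pre-8.3 style sketch (all addresses, pool ranges, and names are illustrative):

! Dynamic NAT / PAT for the inside subnet
nat (inside) 1 10.1.1.0 255.255.255.0
global (outside) 1 209.165.201.10-209.165.201.20 netmask 255.255.255.224
global (outside) 1 interface
! Static NAT and static PAT
static (inside,outside) 209.165.201.5 10.1.1.5 netmask 255.255.255.255
static (inside,outside) tcp interface 8080 10.1.1.6 80 netmask 255.255.255.255
! NAT exemption (nat 0 access-list), commonly used for VPN traffic
access-list NONAT extended permit ip 10.1.1.0 255.255.255.0 10.2.2.0 255.255.255.0
nat (inside) 0 access-list NONAT
! Policy NAT: translate only when going to a specific destination
access-list POLICY-A extended permit ip 10.1.1.0 255.255.255.0 host 209.165.202.129
nat (inside) 2 access-list POLICY-A
global (outside) 2 209.165.201.30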
Order of NAT Commands Used to Match Real Addresses
The security
appliance matches real addresses to NAT commands in the following order:
1. NAT exemption (nat 0
access-list)—In order, until the first match. Identity NAT is not
included in this category; it is included in the regular static NAT or regular
NAT category. We do not recommend overlapping addresses in NAT exemption
statements because unexpected results can occur.
2. Static NAT and Static PAT (regular and policy) (static)—In order, until the first
match. Static identity NAT is included in this category.
3. Policy dynamic NAT (nat access-list)—In
order, until the first match. Overlapping addresses are allowed.
4. Regular dynamic NAT (nat)—Best
match. Regular identity NAT is included in this category. The order of the NAT
commands does not matter; the NAT statement
that best matches the real address is used. For example, you can create a
general statement to translate all addresses (0.0.0.0) on an interface. If you
want to translate a subset of your network (10.1.1.1) to a different address,
then you can create a statement to translate only 10.1.1.1. When 10.1.1.1 makes
a connection, the specific statement for 10.1.1.1 is used because it matches
the real address best. We do not recommend using overlapping statements; they
use more memory and can slow the performance of the security appliance.
8.3 and later:
NAT Types
You can
implement NAT using the following methods:
•Static NAT—A consistent mapping
between a real and mapped IP address. Allows bidirectional traffic initiation.
•Dynamic NAT—A group of real IP
addresses are mapped to a (usually smaller) group of mapped IP addresses, on a
first come, first served basis. Only the real host can initiate traffic.
•Dynamic Port Address Translation
(PAT)—A group of real IP addresses are mapped to a single IP address using a
unique source port of that IP address.
•Identity NAT—A real address is statically translated to itself, essentially bypassing NAT. You might want to configure NAT this way when you want to translate a large group of addresses, but then want to exempt a smaller subset of addresses.
Order of NAT
Rules.
–Network object NAT—Automatically
ordered in the NAT table.
–Twice NAT—Manually ordered in the
NAT table (before or after network object NAT rules).
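A rough 8.3-and-later equivalent of the types above, using network object NAT plus one exemption-style twice NAT rule (object names and addresses are illustrative):

object network INSIDE-NET
 subnet 10.1.1.0 255.255.255.0
 nat (inside,outside) dynamic interface
object network WEB-SERVER
 host 10.1.1.5
 nat (inside,outside) static 209.165.201.5
object network REMOTE-VPN-NET
 subnet 10.2.2.0 255.255.255.0
nat (inside,outside) source static INSIDE-NET INSIDE-NET destination static REMOTE-VPN-NET REMOTE-VPN-NET

The manual twice NAT rule lands in section 1 of the NAT table ahead of the object NAT rules; the resulting order can be verified with show nat detail.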
What is NAT-CONTROL?
NAT control
requires that packets traversing from an inside interface to an outside
interface match a NAT rule; for any host on the inside network to access a host
on the outside network, you must configure NAT to translate the inside host
address.
Interfaces
at the same security level are not required to use NAT to communicate. However,
if you configure dynamic NAT or PAT on a same security interface, then all
traffic from the interface to a same security interface or an outside interface
must match a NAT rule.
5.What are the troubleshooting
mechanisms to be followed in cisco firewalls?
Use packet-tracer output and the various lookups, for example:
ASA1# packet-tracer input inside tcp 10.1.101.1 4532 192.168.1.2 80
Phase: 1
Type: ROUTE-LOOKUP
Subtype: input
Result: ALLOW
Config:
Additional Information:
in   192.168.1.0     255.255.255.0   outside

Phase: 2
Type: IP-OPTIONS
Subtype:
Result: ALLOW
Config:
Additional Information:

Phase: 3
Type: NAT
Subtype:
Result: ALLOW
Config:
object network inside
 nat (inside,outside) dynamic interface
Additional Information:
Dynamic translate 10.1.101.1/4532 to 192.168.1.10/37703

Phase: 4
Type: IP-OPTIONS
Subtype:
Result: ALLOW
Config:
Additional Information:

Phase: 5
Type: FLOW-CREATION
Subtype:
Result: ALLOW
Config:
Additional Information:
New flow created with id 5, packet dispatched to next module

Result:
input-interface: inside
input-status: up
input-line-status: up
output-interface: outside
output-status: up
output-line-status: up
Action: allow
6.What is stateful failover ?
(command to enable failover)
When stateful
failover is enabled, the active unit continually passes per-connection state
information to the standby unit. After a failover occurs, the same connection
information is available at the new active unit. Supported end-user
applications are not required to reconnect to keep the same communication
session.
The state information passed to the standby unit includes:
• The NAT translation table
• The TCP connection states
• The UDP connection states
• The ARP table
• The Layer 2 bridge table (when running in transparent firewall mode)
• The HTTP connection states (if HTTP replication is enabled)
• The ISAKMP and IPSec SA table
• The GTP PDP connection database
The information that is not passed to the standby unit when stateful failover is enabled includes:
• The HTTP connection table (unless HTTP replication is enabled)
• The user authentication (uauth) table
• The routing tables
• State information for security service modules
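Since the question also asks for the commands, here is a hedged active/standby sketch from the primary ASA (interface names and addressing are illustrative):

failover lan unit primary
failover lan interface FOLINK GigabitEthernet0/2
failover interface ip FOLINK 172.16.100.1 255.255.255.252 standby 172.16.100.2
failover link STATELINK GigabitEthernet0/3
failover interface ip STATELINK 172.16.101.1 255.255.255.252 standby 172.16.101.2
failover

The failover link command is what enables stateful failover (the state link can also share the LAN failover interface); the final failover command turns the feature on. Verify with show failover.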
7.what is transparent firewall ?
Transparent
Firewall Features
Traditionally,
a firewall is a routed hop and acts as a default gateway for hosts that connect
to one of its screened subnets. A transparent firewall, on the other hand, is a
Layer 2 firewall that acts like a "bump in the wire," or a
"stealth firewall," and is not seen as a router hop to connected
devices. The security appliance connects the same network on its inside and
outside ports. Because the firewall is not a routed hop, you can easily
introduce a transparent firewall into an existing network; IP readdressing is
unnecessary.
Bridge
Groups
If you do
not want the overhead of security contexts, or want to maximize your use of
security contexts, you can group interfaces together in a bridge group, and
then configure multiple bridge groups, one for each network. Bridge group
traffic is isolated from other bridge groups; traffic is not routed to another
bridge group within the ASA, and traffic must exit the ASA before it is routed
by an external router back to another bridge group in the ASA. Although the
bridging functions are separate for each bridge group, many other functions are
shared between all bridge groups. For example, all bridge groups share a syslog
server or AAA server configuration. For complete security policy separation,
use security contexts with one bridge group in each context.
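A minimal sketch of a transparent-mode bridge group on ASA 8.4 and later code (names and addressing are illustrative; older versions use a global management IP address instead of a BVI):

firewall transparent
interface GigabitEthernet0/0
 nameif outside
 security-level 0
 bridge-group 1
interface GigabitEthernet0/1
 nameif inside
 security-level 100
 bridge-group 1
interface BVI1
 ip address 192.168.1.100 255.255.255.0

Both member interfaces sit in the same IP subnet, and the BVI address is used only for management and for traffic sourced by the ASA itself.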
Allowing
Layer 3 Traffic
•Unicast IPv4 and IPv6 traffic is
allowed through the transparent firewall automatically from a higher security
interface to a lower security interface, without an ACL.
•ARPs are allowed through the
transparent firewall in both directions without an access list. ARP traffic can
be controlled by ARP inspection.
Allowed MAC
Addresses
The
following destination MAC addresses are allowed through the transparent
firewall. Any MAC address not on this list is dropped.
•TRUE broadcast destination MAC
address equal to FFFF.FFFF.FFFF
•IPv4 multicast MAC addresses from
0100.5E00.0000 to 0100.5EFE.FFFF
•IPv6 multicast MAC addresses from
3333.0000.0000 to 3333.FFFF.FFFF
•BPDU multicast address equal to
0100.0CCC.CCCD
•AppleTalk multicast MAC addresses
from 0900.0700.0000 to 0900.07FF.FFFF
Passing Traffic
Not Allowed in Routed Mode
In routed
mode, some types of traffic cannot pass through the ASA even if you allow it in
an access list. The transparent firewall, however, can allow almost any traffic
through using either an extended access list (for IP traffic) or an EtherType
access list (for non-IP traffic).
Non-IP
traffic (for example AppleTalk, IPX, BPDUs, and MPLS) can be configured to go
through using an EtherType access list.
Note: The transparent mode ASA does not pass CDP packets, or any packets that do not have a valid EtherType greater than or equal to 0x600. An exception is made for BPDUs and IS-IS, which are supported.
Passing
Traffic For Routed-Mode Features
For features
that are not directly supported on the transparent firewall, you can allow
traffic to pass through so that upstream and downstream routers can support the
functionality. For example, by using an extended access list, you can
allow DHCP traffic (instead of the unsupported DHCP relay feature) or multicast
traffic such as that created by IP/TV. You can also establish routing protocol
adjacencies through a transparent firewall; you can allow OSPF, RIP, EIGRP, or
BGP traffic through based on an extended access list. Likewise, protocols like
HSRP or VRRP can pass through the ASA.
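For example, a hedged EtherType access list that lets BPDUs and MPLS frames through a transparent ASA (the ACL name is illustrative):

access-list NON-IP ethertype permit bpdu
access-list NON-IP ethertype permit mpls-unicast
access-group NON-IP in interface inside
access-group NON-IP in interface outside

IP traffic such as OSPF, EIGRP, HSRP, or DHCP is still permitted with a normal extended access list, not an EtherType one.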
BPDU
Handling
To prevent
loops using the Spanning Tree Protocol, BPDUs are passed by default. To block
BPDUs, you need to configure an EtherType access list to deny them. If you are
using failover, you might want to block BPDUs to prevent the switch port from
going into a blocking state when the topology changes. See the "Transparent
Firewall Mode Requirements" section for more information.
MAC Address
vs. Route Lookups
When the ASA
runs in transparent mode, the outgoing interface of a packet is determined by
performing a MAC address lookup instead of a route lookup.
Route
lookups, however, are necessary for the following traffic types:
•Traffic originating on the
ASA—For example, if your syslog server is located on a remote network, you must
use a static route so the ASA can reach that subnet.
•Traffic that is at least one hop
away from the ASA with NAT enabled—The ASA needs to perform a route lookup to
find the next hop gateway; you need to add a static route on the ASA for the
real host address.
•Voice over IP (VoIP) and DNS
traffic with inspection enabled, and the endpoint is at least one hop away from
the ASA—For example, if you use the transparent firewall between a CCM and an
H.323 gateway, and there is a router between the transparent firewall and the
H.323 gateway, then you need to add a static route on the ASA for the H.323
gateway for successful call completion. If you enable NAT for the inspected
traffic, a static route is required to determine the egress interface for the
real host address that is embedded in the packet. Affected applications
include:
–CTIQBE
–DNS
–GTP
–H.323
–MGCP
–RTSP
–SIP
–Skinny (SCCP)
ARP
Inspection
By default,
all ARP packets are allowed through the ASA. You can control the flow of ARP
packets by enabling ARP inspection.
When you
enable ARP inspection, the ASA compares the MAC address, IP address, and source
interface in all ARP packets to static entries in the ARP table, and takes
the following actions:
•If the IP address, MAC address,
and source interface match an ARP entry, the packet is passed through.
•If there is a mismatch between
the MAC address, the IP address, or the interface, then the ASA drops the
packet.
•If the ARP packet does not match
any entries in the static ARP table, then you can set the ASA to either forward
the packet out all interfaces (flood), or to drop the packet.
Note The dedicated management interface, if present, never floods
packets even if this parameter is set to flood.
ARP
inspection prevents malicious users from impersonating other hosts or routers
(known as ARP spoofing). ARP spoofing can enable a
"man-in-the-middle" attack. For example, a host sends an
ARP request to the gateway router; the gateway router responds with the
gateway router MAC address. The attacker, however, sends another ARP response
to the host with the attacker MAC address instead of the router MAC address.
The attacker can now intercept all the host traffic before forwarding it on to
the router.
ARP
inspection ensures that an attacker cannot send an ARP response with the
attacker MAC address, so long as the correct MAC address and the
associated IP address are in the static ARP table.
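A short sketch of enabling this, assuming the gateway router 10.1.1.1 with MAC 00e0.1e4e.3d8b sits on the inside interface (values are illustrative):

arp inside 10.1.1.1 00e0.1e4e.3d8b
arp-inspection inside enable no-flood

With no-flood, ARP packets that do not match a static entry are dropped instead of being flooded out the other interfaces in the bridge group.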
8.how to check the connections
and nat translations?
hostname# show conn detail
54 in use, 123 most used
Flags: A - awaiting inside ACK to SYN, a - awaiting outside ACK to SYN,
       B - initial SYN from outside, b - TCP state-bypass or nailed, C - CTIQBE media,
       D - DNS, d - dump, E - outside back connection, F - outside FIN, f - inside FIN,
       G - group, g - MGCP, H - H.323, h - H.225.0, I - inbound data,
       i - incomplete, J - GTP, j - GTP data, K - GTP t3-response,
       k - Skinny media, M - SMTP data, m - SIP media, n - GUP,
       O - outbound data, P - inside back connection, p - Phone-proxy TFTP connection,
       q - SQL*Net data, R - outside acknowledged FIN,
       R - UDP SUNRPC, r - inside acknowledged FIN, S - awaiting inside SYN,
       s - awaiting outside SYN, T - SIP, t - SIP transient, U - up,
       V - VPN orphan, W - WAAS,
       X - inspected by service module
TCP outside:10.10.49.10/23 inside:10.1.1.15/1026,
    flags UIO, idle 39s, uptime 1D19h, timeout 1h0m, bytes 1940435
UDP outside:10.10.49.10/31649 inside:10.1.1.15/1028,
    flags dD, idle 39s, uptime 1D19h, timeout 1h0m, bytes 1940435
TCP dmz:10.10.10.50/50026 inside:192.168.1.22/5060,
    flags UTIOB, idle 39s, uptime 1D19h, timeout 1h0m, bytes 1940435
TCP dmz:10.10.10.50/49764 inside:192.168.1.21/5060,
    flags UTIOB, idle 56s, uptime 1D19h, timeout 1h0m, bytes 2328346
hostname# show xlate
5 in use, 5 most used
Flags: D - DNS, i - dynamic, r - portmap, s - static, I - identity, T - twice
       e - extended
NAT from any:10.90.67.2 to any:10.9.1.0/24
    flags    idle 277:05:26 timeout 0:00:00
NAT from any:10.1.1.0/24 to any:172.16.1.0/24
    flags    idle 277:05:26 timeout 0:00:00
NAT from any:10.90.67.2 to any:10.86.94.0
    flags    idle 277:05:26 timeout 0:00:00
NAT from any:10.9.0.9, 10.9.0.10/31, 10.9.0.12/30, 10.9.0.16/28, 10.9.0.32/29, 10.9.0.40/30, 10.9.0.44/31 to any:0.0.0.0
    flags    idle 277:05:26 timeout 0:00:00
NAT from any:10.1.1.0/24 to any:172.16.1.0/24
    flags    idle 277:05:14 timeout 0:00:00
9.How would you troubleshoot the
high utilization issue in firewall ?
1))speed and duplex settings.
A speed or
duplex mismatch is most frequently revealed when error counters on the
interfaces in question increase. The most common errors are frame, cyclic
redundancy checks (CRCs), and runts. If these values increment on your
interface, either a speed/duplex mismatch or a cabling issue occurs. You must
resolve this issue before you continue.
Example:
interface ethernet0 "outside" is up, line protocol is up
  Hardware is i82559 ethernet, address is 00d0.b78f.d579
  IP address 192.168.1.1, subnet mask 255.255.255.0
  MTU 1500 bytes, BW 100000 Kbit half duplex
        7594 packets input, 2683406 bytes, 0 no buffer
        Received 83 broadcasts, 153 runts, 0 giants
        378 input errors, 106 CRC, 272 frame, 0 overrun, 0 ignored, 0 abort
        2997 packets output, 817123 bytes, 0 underruns
        0 output errors, 251 collisions, 0 interface resets
        0 babbles, 150 late collisions, 110 deferred
2)) CPU Utilization
If you notice that CPU utilization is high, follow these steps in order to troubleshoot:
• Verify that the connection count in show xlate count is low.
• Verify that the memory blocks are normal.
• Check whether the number of ACL entries is unusually high.
• Issue the show memory detail command, and verify that the memory used by the PIX/ASA shows normal utilization.
• Verify that the counts in show processes cpu-hog and show processes memory are normal.
• Note: Cisco recommends that you enable the ip verify reverse-path interface command on all the interfaces, as it will drop packets that do not have a valid source address, which results in less CPU usage. This also applies to an FWSM facing high CPU issues. (A small sketch follows this list.)
• Another reason for high CPU usage can be too many multicast routes. Issue the show mroute command in order to check whether the PIX/ASA receives too many multicast routes.
• Use the show local-host command in order to see whether the network is experiencing a denial-of-service attack, which can indicate a virus outbreak in the network.
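A quick sketch of the reverse-path check and the verification commands mentioned in the list above (the interface name is illustrative):

ip verify reverse-path interface outside
show xlate count
show memory detail
show processes cpu-hog
show mroute
show local-host

Re-checking show cpu usage afterwards confirms whether utilization has dropped back to normal.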
3)) High Memory Utilization
Here are some possible causes and resolutions for high memory utilization:
• Event logging: event logging can consume large amounts of memory. In order to resolve this issue, log all events to an external server, such as a syslog server.
• Memory leakage: a known issue in the security appliance software can lead to high memory consumption. In order to resolve this issue, upgrade the security appliance software.
• Debugging enabled: debugging can consume large amounts of memory. In order to resolve this issue, disable debugging with the undebug all command.
• Blocking ports: blocking ports on the outside interface of a security appliance causes the security appliance to consume high amounts of memory to block the packets through the specified ports. In order to resolve this issue, block the offending traffic at the ISP end.
• Threat detection: the threat detection feature consists of different levels of statistics gathering for various threats, as well as scanning threat detection, which determines when a host is performing a scan. Turn off this feature to consume less memory.
4)) show perfmon
The show
perfmon command is used to monitor the amount and types of traffic that the
PIX inspects. This command is the only way to determine the number of
translations (xlates) and connections (conn) per second. Connections are
further broken down into TCP and User Datagram Protocol (UDP) connections. See
Description of Output for descriptions of the output that this command
generates.
Example:
PERFMON STATS:      Current      Average
Xlates                 18/s         19/s
Connections            75/s         79/s
TCP Conns              44/s         49/s
UDP Conns              31/s         30/s
URL Access             27/s         30/s
URL Server Req          0/s          0/s
TCP Fixup            1323/s       1413/s
TCPIntercept            0/s          0/s
HTTP Fixup            923/s        935/s
FTP Fixup               4/s          2/s
AAA Authen              0/s          0/s
AAA Author              0/s          0/s
AAA Account             0/s          0/s
show blocks
Along with
the show
cpu usage command, you can use the show
blocks command in order to determine whether the ASA is
overloaded.
Packet-Processing
Blocks (1550 and 16384 Bytes)
When it comes
into the ASA interface, a packet is placed on the input interface queue, passed
up to the OS, and placed in a block. For Ethernet packets, the 1550-byte blocks
are used; if the packet comes in on a 66 MHz Gigabit Ethernet card, the
16384-byte blocks are used. The ASA determines whether the packet is permitted
or denied based on the Adaptive Security Algorithm (ASA) and processes the
packet through to the output queue on the outbound interface. If the ASA cannot
support the traffic load, the number of available 1550-byte blocks (or
16384-byte blocks for 66 MHz GE) hovers close to 0 (as shown in the CNT column
of the command output). When the CNT column hits zero, the ASA attempts to
allocate more blocks, up to a maximum of 8192. If no more blocks are available,
the ASA drops the packet.
Failover and
Syslog Blocks (256 Bytes)
The 256-byte
blocks are mainly used for stateful failover messages. The active ASA generates
and sends packets to the standby ASA in order to update the translation and
connection table. During periods of bursty traffic where high rates of
connections are created or torn down, the number of available 256-byte blocks
may drop to 0. This drop indicates that one or more connections are not updated
to the standby ASA. This is generally acceptable because the next time around
the stateful failover protocol catches the xlate or connection that is lost.
However, if the CNT column for 256-byte blocks stays at or near 0 for extended
periods of time, the ASA cannot keep up with the translation and connection
tables that are synchronized because of the number of connections per second
that the ASA processes. If this happens consistently, upgrade the ASA to a
faster model.
Syslog
messages sent out from the ASA also use the 256-byte blocks, but they are not
generally released in such a quantity that causes a depletion of the 256-byte
block pool. If the CNT column shows that the number of 256-byte blocks is near
0, ensure that you do not log at Debugging (level 7) to the syslog server. This
is indicated by the logging trap line in the ASA configuration. It is
recommended that you set logging to Notification (level 5) or lower, unless you
require additional information for debugging purposes.
Example
Ciscoasa# show blocks
  SIZE    MAX    LOW    CNT
     4   1600   1597   1600
    80    400    399    400
   256    500    495    499
  1550   1444   1170   1188
 16384   2048   1532   1538
sh xlate
sh conn count
10. What is one of the best issues you have troubleshot on a firewall?
L2 adjacency issue.
Translation problem - multiple devices on the path.
Firewall does not build any connection through the box.
IPS - global correlation signature update problem.
How to map multiple servers to the same IP address.
Failover pair zero-downtime code upgrade.
L2 adjacency issue:
A DMZ host is not able to reach an inside host.
object network obj_192.168.5.5
 nat (inside,dmz) static 14.36.109.7
route inside 192.168.5.0 255.255.255.0 192.168.2.33
Initiate 10000 pings with a zero timeout:
-> Xlate is in place.
-> Route is in place.
-> Captures on dmz - packets visible on dmz, nothing on the inside interface.
-> sh asp drop - no drop counter anywhere near the 10000 range.
-> Connections are built in syslogs as well for the ICMP pings (built and teardown seen).
-> NO ARP ENTRY for the router in question. The ASA needs the MAC address of that next-hop router before it can forward the traffic.
Translation problem:
User tries to reach xyz.cisco.com, which resolves to a public IP address.
The website is our own and resides on one of the internal interfaces.
The real IP address of the web server is a private IP address.
User -> router -> FWSM -> ASA -> outside router -> internet
The router translates the user source to 14.36.85.0.
Web server public address -> 14.36.90.0
User real address -> 172.16.20.0
static (inside,outside) 14.36.90.210 10.55.16.2
static (inside,inside) 14.36.90.210 10.55.16.2
static (inside,inside) 14.36.85.0 14.36.85.0 netmask 255.255.255.0
same-security-traffic permit intra-interface
Replaced the first static line above with the dns keyword (DNS doctoring):
static (inside,outside) 14.36.90.210 10.55.16.2 dns

NO NEW CONNECTIONS BUILT
Problem: No connections are built through
the FWSM. Unable to ping from the SVI (Switch Virtual Interface) to the
directly connected host on the outside.
Source IP: 10.10.10.1 / Source VLAN - 801
Destination IP: 14.36.109.35 / Destination VLAN – 36
Verified
basic Route, Translation and Permission. MSFC had a route configured for the
14.36.109.0/24 network via the admin context inside interface address
10.10.10.2
SW-6509# sh run | i ip route
ip route 14.36.109.0 255.255.255.0 10.10.10.2
sh logg | i 10.10.10.2 shows no output at all.
Configured an access list and captures on the ingress and egress interfaces. We see packets ingress but not egress.
access-list tac extended permit ip host 10.10.10.1 host 14.36.109.35
access-list tac extended permit ip host 14.36.109.35 host 10.10.10.1
cap capin interface inside access-list tac
cap capout interface outside access-list tac
fwsm/admin/pri/act# sh cap
capture capin type raw-data access-list tac interface inside [Capturing - 168 bytes]
capture capout type raw-data access-list tac interface outside [Capturing - 0 bytes]
fwsm/admin/pri/act# sh cap capin
2 packets seen, 2 packets captured
   1: 00:07:32.1528865544 802.1Q vlan#801 P0 10.10.10.1 > 14.36.109.35: icmp: echo request
   2: 00:07:34.1528867534 802.1Q vlan#801 P0 10.10.10.1 > 14.36.109.35: icmp: echo request
Though configured, there are no existing xlates on the box and no new ones are getting built either.
fwsm/admin/pri/act# sh run nat
nat (inside) 10 0.0.0.0 0.0.0.0
fwsm/admin/pri/act# sh run global
global (outside) 10 14.36.201.1-14.36.201.10 netmask 255.255.255.0
fwsm/admin/pri/act# sh xlate
0 in use, 1 most used
What could it be? The FWSM receives the packets and simply does not process them; it refuses to build connections. Why?
Pay close attention to the "show log" output below:
fwsm/admin/pri/act# sh logg
Syslog logging: enabled
    Facility: 20
    Timestamp logging: disabled
    Name logging: enabled
    Standby logging: disabled
    Deny Conn when Queue Full: disabled
    Console logging: disabled
    Monitor logging: level debugging, 7897 messages logged
    Buffer logging: level debugging, 11850 messages logged
    Trap logging: level debugging, facility 20, 76 messages logged
        Logging to inside 10.10.10.3 tcp/9999 disabled
    History logging: disabled
    Device ID: disabled
    Mail logging: disabled
    ASDM logging: disabled
TCP logging to a syslog server is configured, and the output shows it as "disabled," meaning the syslog server is not listening on port 9999. It may be reachable via ICMP, but it is certainly not listening on TCP port 9999.
So what if the syslog server is not reachable?
If the syslog server is UDP based (best effort, port 514), then there is no problem.
If the syslog server cannot be reached on the configured TCP port 9999, then no new connections will be built through the firewall.
This rule applies to the PIX and ASA as well as the FWSM.
To mitigate this issue, the "logging permit-hostdown" command must be added.
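A hedged sketch of the relevant configuration, reusing the server address and port from the example above:

logging enable
logging trap informational
logging host inside 10.10.10.3 tcp/9999
logging permit-hostdown

With logging permit-hostdown in place, the firewall keeps building new connections even when the TCP syslog server is unreachable.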
VPN
1.What is
site-site and remote access vpn?
What is vpn?
A virtual private network (VPN) extends a private network across a
public network, such as the Internet. It enables a
computer to send and receive data across shared or public networks as if it
were directly connected to the private network, while benefiting from the
functionality, security and management policies of the private network.[1] This
is done by establishing a virtual point-to-point connection
through the use of dedicated connections, encryption, or a combination of the
two.
A VPN connection across the Internet is similar to a wide area network (WAN) link
between the sites. From a user perspective, the extended network resources are
accessed in the same way as resources available from the private network.[2]
Site to
site?
A site-to-site
VPN allows offices in multiple fixed locations to establish secure connections
with each other over a public network such as the Internet.
Site-to-site VPN extends the company's network, making computer resources from
one location available to employees at other locations. An example of a company
that needs a site-to-site VPN is a growing corporation with dozens of branch
offices around the world.
There are two types of site-to-site VPNs:
• Intranet-based -- If a company has one or more remote locations that they wish to join in a single private network, they can create an intranet VPN to connect each separate LAN to a single WAN.
• Extranet-based -- When a company has a close relationship with another company (such as a partner, supplier or customer), it can build an extranet VPN that connects those companies' LANs. This extranet VPN allows the companies to work together in a secure, shared network environment while preventing access to their separate intranets.
Remote-access
VPN
A remote-access
VPN allows individual users to establish secure connections with a remote
computer network. Those users can access the secure resources on that network
as if they were directly plugged in to the network's servers. An example of a
company that needs a remote-access VPN is a large firm with hundreds of
salespeople in the field.
2.What is phase 1 tunnel and the
parameters involved ?
Two-phase
protocol:
Phase 1
exchange:
Two peers
establish a secure, authenticated channel with which to communicate; Main mode
or Aggressive mode accomplishes a Phase 1 exchange.
There is
also a Transaction Mode, that sits between Phase 1 and Phase 2; (Phase 1.5)
which is used for Cisco Easy VPN (EzVPN) client scenario performing XAUTH or
client attributes (mode config).
Phase 2
exchange:
Security
associations are negotiated on behalf of IPSec services; Quick Mode
accomplishes a Phase 2 exchange.
Each phase
has its SAs: ISAKMP SA (Phase 1) and IPSec SA (Phase 2).
IKE and ISAKMP
• IKE is a key exchange mechanism.
• It is typically used for establishing IPSec sessions.
• There are five variations of an IKE negotiation:
  - Two modes (aggressive mode and main mode)
  - Three authentication methods (preshared keys, public key encryption, and public key signature)
• IKE is purely a key exchange protocol.
IKE: Main Mode (Phase 1)
MSG 1: Initiator offers acceptable encryption and authentication algorithms (e.g., 3DES, MD5, RSA), i.e., the ISAKMP proposal.
MSG 2: Responder presents acceptance of the proposal (or not).
MSG 3: Initiator Diffie-Hellman key and nonce (the key value is usually a number of 1024-bit length).
MSG 4: Responder Diffie-Hellman key and nonce.
MSG 5: Initiator signature, ID, and keys (maybe a certificate), i.e., authentication data.
MSG 6: Responder signature, ID, and keys (maybe a certificate).
IKE: Aggressive Mode (Phase 1)
MSG 1: Initiator key exchange, ID, nonce, and parameter proposal.
MSG 2: Responder key exchange, ID, nonce, and acceptable parameters.
MSG 3: Initiator signature, hash, and ID.
IKE: Quick Mode (Phase 2)
MSG 1: Initiator hash, SA proposal, IPSec transform, keying material, and ID (proxy identities, source, and destination).
MSG 2: Responder hash, agreed-to SA proposal, SPI, and key.
MSG 3: Initiator hash, to prove to the peer that it is current and live.
Now passing encrypted traffic.
Easy explanation:
• The router has a packet that is about to be forwarded, and it notices that it matches a crypto ACL.
• The router looks to see if there is an IPSec SA in place; if not...
• The router looks to see if there is an IKE Phase 1 SA in place; if not...
• The router becomes the initiator and sends over all of its IKE Phase 1 policies.
• The remote router responds, specifying which IKE Phase 1 policy is a match.
• Both peers run DH and generate shared secret keying material.
• Both peers authenticate with each other, using the authentication method agreed to in the IKE Phase 1 negotiation. (The IKE Phase 1 tunnel is now up.)
• Using the IKE Phase 1 tunnel as a cloak of security, the two peers negotiate the details of IKE Phase 2.
• DH is not run again; the shared secret keying material from the DH exchange in IKE Phase 1 is reused, unless PFS is used.
• The IKE Phase 2 tunnel (a.k.a. the IPSec tunnel) is now in place, and the data is encapsulated and sent through the tunnel.
I am
grateful that the mathematicians and engineers of these security protocols did
all the heavy lifting, and all we do is design networks that use the
technology, configure the gear to work correctly, and troubleshoot when life
happens.
Shared
secret keying material is created via DH during IKE phase 1.
This keying
material can be used by any symmetrical algorithm that wants to use this keying
material, or parts of that keying material.
IKE phase 1 creates an SA based on the encryption agreed between the peers.
IKE phase 2 creates an SA (the IPSec SA) based on the encryption agreed between the peers (these would be the IPSec transform sets that are negotiated).
Phase 1 could use 3DES, and phase 2 could use AES. Both dip into the pool of keying material created by DH during the IKE phase 1 process, but they create two separate SAs.
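To make the parameters concrete, here is a hedged IOS-style site-to-site sketch (peer address, pre-shared key, ACL, and names are all illustrative):

crypto isakmp policy 10
 encryption aes
 hash sha
 authentication pre-share
 group 2
 lifetime 86400
crypto isakmp key MySharedSecret address 203.0.113.2
crypto ipsec transform-set TSET esp-aes esp-sha-hmac
ip access-list extended VPN-TRAFFIC
 permit ip 10.1.1.0 0.0.0.255 10.2.2.0 0.0.0.255
crypto map VPNMAP 10 ipsec-isakmp
 set peer 203.0.113.2
 set transform-set TSET
 match address VPN-TRAFFIC
interface GigabitEthernet0/0
 crypto map VPNMAP

The isakmp policy lines are the Phase 1 parameters (encryption, hash, authentication method, DH group, lifetime); the transform set and crypto map define the Phase 2 side.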
4.What is PFS ?
Public-key
systems which generate random public keys per session for the purposes of key
agreement which are not based on any sort of deterministic algorithm
demonstrate a property referred to as perfect forward secrecy. This
means that the compromise of one message cannot lead to the compromise of
others, and also that there is not a single secret value which can lead to the
compromise of multiple messages.
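On Cisco gear, PFS is enabled per crypto map entry; for example, adding the hedged lines below to the crypto map sketched earlier forces a fresh DH group 2 exchange during Quick Mode:

crypto map VPNMAP 10 ipsec-isakmp
 set pfs group2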
5.Why is a DH
required ?
The Diffie–Hellman
key exchange method allows two parties that have no prior knowledge of each
other to jointly establish a shared secret key over an
insecure communications
channel.
This key can then be used to encrypt subsequent communications using a symmetric key cipher.
What is asymmetric key?
In an
asymmetric key encryption scheme, anyone can encrypt messages using the public
key, but only the holder of the paired private key can decrypt. Security
depends on the secrecy of the private key.
6.How to check
the status of the tunnel in phase 1 & 2 ?
sh cry isa sa
MM_NO_STATE
The ISAKMP
SA has been created, but nothing else has happened yet. It is
"larval" at this stage—there is no state.
MM_SA_SETUP
The peers
have agreed on parameters for the ISAKMP SA.
MM_KEY_EXCH
The peers
have exchanged Diffie-Hellman public keys and have generated a shared secret.
The ISAKMP SA remains unauthenticated.
MM_KEY_AUTH
The ISAKMP
SA has been authenticated. If the router initiated this exchange, this state
transitions immediately to QM_IDLE, and a Quick Mode exchange begins.
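Phase 2 is checked separately; a couple of commonly used commands (the syntax is essentially the same on IOS and ASA, and the peer address is illustrative):

show crypto isakmp sa
show crypto ipsec sa
show crypto ipsec sa peer 203.0.113.2

In the Phase 2 output, incrementing #pkts encaps / #pkts decaps counters indicate that traffic is actually being encrypted and decrypted across the tunnel.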
8.what is GRE and why it’s
required?
GRE
encapsulates packets into IP packets and redirects them to an intermediate
host, where they are de-encapsulated and routed to their final destination.
Because the route to the intermediate host appears to the inner datagrams as
one hop, Juniper Networks EX Series Ethernet switches can operate as if they
have a virtual point-to-point connection with each other. GRE tunnels allow
routing protocols like RIP and OSPF to forward data packets from one switch to
another switch across the Internet. In addition, GRE tunnels can encapsulate
multicast data streams for transmission over the Internet.
Encapsulation
and De-Encapsulation on the Switch
Encapsulation—A
switch operating as a tunnel source router encapsulates and forwards GRE
packets as follows:
•
When a
switch receives a data packet (payload) to be tunneled, it sends the packet to
the tunnel interface.
•
The tunnel
interface encapsulates the data in a GRE packet.
•
The system
encapsulates the GRE packet in an IP packet.
•
The IP
packet is forwarded based on its destination address and routing table.
De-encapsulation—A
switch operating as a tunnel remote router handles GRE packets as follows:
•
When the
destination switch receives the IP packet from the tunnel interface, the switch
checks the destination address.
•
The IP
header is removed, and the packet is submitted to the GRE protocol.
The GRE
protocol strips off the GRE header and submits the payload packet for
forwarding.
Multicast
support:
In many network scenarios you want to configure your network to use GRE
tunnels to send Protocol Independent Multicast (PIM) and multicast traffic
between routers. Typically, this occurs when the multicast source and receiver
are separated by an IP cloud which is not configured for IP multicast routing.
In such network scenarios, configuring a tunnel across an IP cloud with PIM
enabled transports multicast packets toward the receiver.
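The answer above is worded for Juniper EX switches, but the same idea on Cisco IOS looks roughly like this (addresses are illustrative):

interface Tunnel0
 ip address 172.16.0.1 255.255.255.252
 tunnel source GigabitEthernet0/0
 tunnel destination 198.51.100.2
router ospf 1
 network 172.16.0.0 0.0.0.3 area 0

Because the tunnel interface behaves like a point-to-point link, OSPF (or any other routing protocol, or PIM for multicast) can run across it even though the underlying path crosses the Internet.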
9.How can we
carry routing updates via IPSec without GRE?
You could configure L2TP + IPSec. L2TP permits the use of multicast, so routing updates can be carried.
This is possible only in remote-access VPN ("client to site").
10.What is nat
traversal?
Background:
ESP encrypts all critical information, encapsulating the
entire inner TCP/UDP datagram within an ESP header. ESP is an IP protocol in
the same sense that TCP and UDP are IP protocols (OSI Network Layer 3),
but it does not have any port information like TCP/UDP (OSI Transport
Layer 4). This is a difference from ISAKMP which uses UDP port 500
as its transport layer.
PAT (Port Address Translation) is used to provide many hosts access to the internet through the same publicly routable IP address. PAT works by building a database that binds each local host's IP address to the publicly routable IP address using a specific port number. In this manner, any packet sourced from an inside host will have its IP header modified by the PAT device such that the source address and port number are changed from the RFC 1918 address/port to the publicly routable IP address and a new unique port. Referencing this binding database, any return traffic can be untranslated in the same manner.
Q1: Why can't an ESP packet pass through a PAT device?
It is precisely because ESP is a protocol without ports that it cannot pass through PAT devices. Because there is no port to change in the ESP packet, the binding database can't assign a unique port to the packet at the time it changes its RFC 1918 address to the publicly routable address. If the packet can't be assigned a unique port, then the database binding won't complete and there is no way to tell which inside host sourced the packet. As a result, there is no way for the return traffic to be untranslated successfully.
Q2: How does NAT-T work with ISAKMP/IPSec?
NAT Traversal performs two tasks:
• Detects whether both ends support NAT-T
• Detects NAT devices along the transmission path (NAT-Discovery)
Step one occurs in ISAKMP Main Mode messages one and two. If both devices support NAT-T, then NAT-Discovery is performed in ISAKMP Main Mode messages (packets) three and four. The NAT-D payload sent is a hash of the original IP address and port. Devices exchange two NAT-D packets, one with the source IP and port, and another with the destination IP and port. The receiving device recalculates the hash and compares it with the hash it received; if they don't match, a NAT device exists along the path.
If a NAT device has been determined to exist, NAT-T will change
the ISAKMP transport with ISAKMP Main Mode messages five and six, at which
point all ISAKMP packets change from UDP port 500 to UDP port 4500. NAT-T
encapsulates the Quick Mode (IPSec Phase 2) exchange inside UDP 4500 as
well. After Quick Mode completes, data that gets encrypted on the IPSec
Security Association is encapsulated inside UDP port 4500 as well, thus
providing a port to be used in the PAT device for translation.
To visualize how this works and how the IP packet is encapsulated:
• The clear-text packet will be encrypted/encapsulated inside an ESP packet.
• The ESP packet will be encapsulated inside a UDP/4500 packet.
NAT-T encapsulates ESP packets inside UDP and assigns both
the Source and Destination ports as 4500. After this encapsulation there
is enough information for the PAT database binding to build successfully.
Now ESP packets can be translated through a PAT device.
When a packet with source and destination port of 4500 is sent
through a PAT device (from inside to outside), the PAT device will change the
source port from 4500 to a random high port, while keeping the destination port
of 4500. When a different NAT-T session passes through the PAT device, it will
change the source port from 4500 to a different random high port, and so on.
This way each local host has a unique database entry in the PAT devices mapping
its RFC1918 ip address/port4500 to the public ip address/high-port.
Q3: What is the
difference between NAT-T and IPSec-over-UDP ?
Although both of these protocols work similarly, there are two main differences:
• When NAT-T is enabled, it encapsulates the ESP packet with UDP only when it encounters a NAT device; otherwise, no UDP encapsulation is done. IPSec over UDP, however, always encapsulates the packet with UDP.
• NAT-T always uses the standard port, UDP 4500, and it is not configurable. IPSec over UDP normally uses UDP 10000, but this could be any other port based on the configuration of the VPN server.
NAT-T is not defined for AH because there is no way to effectively work around the AH integrity violation problem (AH's integrity check covers the IP header that NAT must modify).
11.What are the ports involved in
nat traversal ?
udp 4500
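On the ASA, NAT-T is enabled globally with a single line (20 is the NAT keepalive interval in seconds; depending on the software version it may already be on by default, and IOS negotiates NAT-T automatically when both peers support it):

crypto isakmp nat-traversal 20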
IPS:
11. Difference between an IPS and a firewall?
(example
from cisco live)
A)
In a way, an IPS can know inside vs. outside because an IPS can be aware of trusted subnets... an IPS is more like antivirus than it is a firewall. With a firewall, you will be letting ALL of a certain type of traffic through. Let's say you forward all traffic on port 80 to your web server. It's the job of the IPS to look for abnormal traffic and block it. The IPS will get regular definition updates (like antivirus), and those signatures will be looking for known issues, like traffic that is known to cause DoS attacks or traffic that is known to allow people to hack into your server.
The IPS is not designed to function like a firewall, blocking all traffic on a certain port. It is designed to look for KNOWN traffic that can cause your network problems.
B) Firewalls work at layers 3, 4 & 7: because they deal with IP packets and port numbers, this gives them layers 3 & 4, and now they are starting to recognize layer 7 applications as well.
An IPS works at layers 2, 3, 4 & 7: just think about what an IPS signature can look at.
A layered security approach is always best because no one device or method can protect against every type of attack. When just talking about firewalls and IPS/IDS, I do like putting them "behind" my firewalls, even if I have layers of firewalls in the network. This is so the IPS does not have to "process" traffic that should not even be there based on addressing alone. By processing less traffic it also lessens the number of positives and false positives to be looked at. Although placing them "in front" would not be wrong.
C) With an IPS you get various features like anomaly detection, threat detection, data flow pattern matching, signatures, global correlation, and many more which are not there in a firewall. For example, a firewall will not be able to detect an attack in which the data deviates from its regular pattern, whereas an IPS will detect and reset that connection because it has built-in anomaly detection. The IPS can fight various viruses/worms/trojans/adware/spyware which the firewall cannot, unless you use the ASA-SSMs on it.
1. What are IPS and IDS? Tell me the difference between them.
An IDS does
just what its name tells us - it detects network intrusion. Simple
enough! However, the IDS is basically a "town crier" in that it
will notify other network devices about the attack, but does not directly defend
against the attack itself.
The IDS does
not receive traffic flows directly. Instead, the traffic flows are
mirrored to the IDS.
When
infected traffic does hit the network, the IDS will see this and take
appropriate action. The problem is that this appropriate action is not direct
action; since the IDS is not in the traffic flow, it has to inform a network
device that is in that flow that action must be taken.
By the time
the IDS detects an issue and notifies the appropriate network devices, the
beginning of the infected traffic flow is already in the network.
In contrast,
our Intrusion Prevention System (IPS) does sit in the middle of the traffic
flow - in this case, the IPS will actually be our Cisco router. When the
IPS detects a problem, the IPS itself can prevent the traffic from entering the
network.
Cisco's
website describes the IPS as a "restructuring" of the IDS.
Although
firewalls are effective at blocking some types of attacks, they have one major
weakness: You simply can't close all of the ports. Some ports are necessary for
things like HTTP, SMTP and POP3 traffic. Ports corresponding to these common
services must remain open in order for those services to function properly. The
problem is that hackers have learned how to pass malicious traffic through
ports that are commonly left open.
In response to this threat, some companies started to deploy intrusion detection systems (IDS).
4. What are promiscuous and inline modes?
In promiscuous mode the sensor receives only a copy of the traffic (mirrored/SPAN), like the IDS described above, so it cannot drop the offending packet itself and must ask another device to block. In inline mode the sensor sits directly in the traffic path, like the IPS described above, so it can drop malicious packets before they enter the network.
5.What is a
signature ? tell me some signature engines?
Understanding
Signatures
Attacks or
other misuses of network resources can be defined as network intrusions.
Sensors that use a signature-based technology can detect network intrusions. A
signature is a set of rules that your sensor uses to detect typical intrusive
activity, such as DoS attacks. As sensors scan network packets, they use
signatures to detect known attacks and respond with actions that you define.
The sensor
compares the list of signatures with network activity. When a match is found,
the sensor takes an action, such as logging the event or sending an alert.
Sensors let you modify existing signatures and define new ones.
Signature-based
intrusion detection can produce false positives because certain normal network
activity can be misinterpreted as malicious activity. For example, some network
applications or operating systems may send out numerous ICMP messages, which a
signature-based detection system might interpret as an attempt by an attacker
to map out a network segment. You can minimize false positives by tuning your
signatures.
An example
of a signature engine:
•Atomic IP
•Atomic IP Advanced
•Service HTTP
•Service MSRPC
•Service RPC
•State (SMTP, ...)
•String ICMP
•String TCP
•String UDP
•Sweep
9. What are the event actions involved in inline mode?
•Deny attacker inline—45
•Deny attacker victim pair
inline—40
•Deny attacker service pair
inline—40
•Deny connection inline—35
•Deny packet inline—35
•Modify packet inline—35
•Request block host—20
•Request block connection—20
•Reset TCP connection—20
•Request rate limit—20
Can you
explain stateful inspection?
Stateful
inspection, also known as dynamic packet filtering, is a firewall technology
that monitors the state of active connections and uses this information to
determine which network packets to allow through the firewall. Stateful
inspection has largely replaced an older technology, static packet filtering.
In static packet filtering, only the headers of packets are checked -- which
means that an attacker can sometimes get information through the firewall
simply by indicating "reply" in the header. Stateful inspection, on
the other hand, analyzes packets down to the application layer. By recording
session information such as IP addresses and port numbers, a dynamic packet
filter can implement a much tighter security posture than a static packet
filter can.
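As a rough illustration of the idea (a minimal Python sketch, not any vendor's
implementation, with arbitrary example addresses), a stateful device records the
connections it sees leaving the trusted side and then admits only the inbound
packets that belong to one of those recorded connections:

conn_table = set()  # entries are (src_ip, src_port, dst_ip, dst_port, proto)

def outbound(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    """An inside host opens a connection; remember its 5-tuple."""
    conn_table.add((src_ip, src_port, dst_ip, dst_port, proto))

def inbound_allowed(src_ip, src_port, dst_ip, dst_port, proto="tcp"):
    """Permit an inbound packet only if it is return traffic for a
    connection already seen leaving the inside network."""
    return (dst_ip, dst_port, src_ip, src_port, proto) in conn_table

outbound("10.1.1.5", 51000, "93.184.216.34", 443)
print(inbound_allowed("93.184.216.34", 443, "10.1.1.5", 51000))  # True: reply traffic
print(inbound_allowed("203.0.113.9", 443, "10.1.1.5", 51000))    # False: unsolicited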
Can you
explain the concept of demilitarized zone?
The concept
of the DMZ, like many other network security concepts, was borrowed from
military terminology. Geopolitically, a demilitarized zone (DMZ) is an area
that runs between two territories that are hostile to one another or two
opposing forces' battle lines. The DMZ likewise provides a buffer zone that
separates an internal network from the often hostile territory of the Internet.
Sometimes it's called a "screened subnet" or a "perimeter
network," but the purpose remains the same.
What is IP
spoofing and how can it be prevented?
IP spoofing
is a mechanism used by attackers to gain unauthorized access to a system. Here,
the intruder sends messages to a computer with an IP address indicating that
the message is coming from a trusted host. This is done by forging the header
so it contains a different source address, making it appear that the packet was
sent by a different machine. Prevention:
• Packet filtering: allow only packets with valid, expected source addresses to enter the network
• Using special routers and firewalls with anti-spoofing features
• Encrypting the session
ASA Interview Question
1. Adaptive Security Algorithm
Adaptive
Security Algorithm (ASA) is a Cisco algorithm for managing stateful connections
for PIX Firewalls. ASA controls all traffic flow through the PIX firewall,
performs stateful inspection of packets, and creates remembered entries in
connection and translation tables. These entries are referenced whenever
traffic tries to flow back through from lower security levels to higher
security levels. If a match is found, the traffic is allowed through. Finally,
the ASA provides an extra level of security by randomizing the TCP sequence
numbers of outgoing packets in an effort to make them more difficult for
attackers to predict.
2. Active FTP vs. Passive FTP, a
Definitive Explanation
There are
two types of FTP access: user (authenticated) FTP and anonymous FTP.
User or authenticated FTP:
User FTP requires an account on the server (in general, it is for users who
already have accounts on the machine and lets them access any files they could
access if they were logged in).
Anonymous:
Anonymous
FTP is for people who don't have an account and is used to provide access to
specific files to the world at large.
FTP uses two
separate TCP connections: one to carry commands and results between the client
and the
server
(commonly called the command channel ), and the other to carry any actual files
and directory listings transferred (the data channel ).
Normal Mode
or Active Mode
To start an
FTP session in normal mode, a client first allocates two TCP ports for itself,
each of them with a port number above 1024. It uses the first to open the
command channel connection to the server and then issues FTP's PORT command to
tell the server the number of the second port, which the client wants to use
for the data channel. The server then opens the data channel connection. This
data channel connection is backwards from most protocols, which open
connections from the client to the server.
This
backwards open complicates things for sites that are attempting to do
start-of-connection packet filtering to ensure that all TCP connections are
initiated from the inside, because external FTP servers will attempt to
initiate data connections to internal clients, in response to command
connections opened from those internal clients. Furthermore, these connections
will be going to ports known to be in an unsafe range.
Figure 17.1.
A normal-mode FTP connection
Passive Mode
To start a
connection in passive mode, an FTP client allocates two TCP ports for its own
use and uses the first port to contact the FTP server, just as when using
normal mode. However, instead of issuing the PORT command to tell the server
the client's second port, the client issues the PASV command. This causes the
server to allocate a second port of its own for the data channel (for
architectural reasons, servers use random ports above 1023 for this, not port
20 as in normal mode; you couldn't have two servers on the same machine
simultaneously listening for incoming PASV-mode data connections on port 20)
and tell the client the number of that port. The client then opens the data
connection from its port to the data port the server has
just told it
about.
Figure 17.2
shows a passive-mode FTP connection
Passive mode
is useful because it allows you to avoid start-of-connection filtering
problems. In passive mode, all connections will be opened from the inside, by
the client.
(Or)
In passive
mode, only the server is required to open up ports for incoming traffic.
3. Trace route and Ping command
working
Ping:
Ping relies
on the ICMP protocol, which is used to diagnose transmission conditions. For
this reason, it uses two types of protocol messages (out of the 18 offered by
ICMP):
•Type 8,
which corresponds to an "echo request", sent by the source machine;
•Type 0,
which corresponds to an "echo reply", sent by the target machine.
At regular
intervals (by default, every second), the source machine (the one running the
ping command) sends an "echo request" to the target machine. When the
"echo reply" packet is received, the source machine displays a line
containing certain information, such as the round-trip time. If the reply is
not received, a line saying "request timed out" is shown instead.
Trace Route:
Tracert
works by incrementing the TTL value by one for each ICMP Echo Request it sends,
then waiting for an ICMP Time Exceeded message. The TTL values of the Tracert
packets start with an initial value of one; the TTL of each trace after the
first is incremented by one. A packet sent out by Tracert travels one hop
further on each successive trip.
Figure 3.2
shows how Tracert works. Tracert is being run on Host A, and is following the
path to Host B. At Router 1 and Router 2, the TTL is decremented to 0, causing
each router to send an ICMP Time Exceeded message. When the ICMP Echo Request
is received at Host B, it sends back an ICMP Echo Reply.
Step-by-Step
Operation of the Tracert Tool
Example:
When you
execute a traceroute command (e.g., traceroute www.yahoo.com), your machine
sends out 3 UDP packets with a TTL (Time-to-Live) of 1. When those packets
reach the next-hop router, it decreases the TTL to 0 and thus rejects the
packet. It sends an ICMP Time-to-Live Exceeded (Type 11), TTL equal 0 during
transit (Code 0) back to your machine, with a source address of itself, so you
now know the address of the first router in the path.
Next your
machine will send 3 UDP packets with a TTL of 2, thus the first router that you
already know passes the packets on to the next router after reducing the TTL by
1 to 1. The next router decreases the TTL to 0, thus rejecting the packet and
sending the same ICMP Time-to-Live
Exceeded
with its address as the source back to your machine. Thus you now know the
first 2 routers in the path.
This keeps
going until you reach the destination. Since you are sending UDP packets with
the destination address of the host you are interested in, once the packet gets
to the destination it tries to reach the port you have chosen as the
destination port. Because that is an uncommon port, it will most likely be
rejected with an ICMP Destination Unreachable (Type 3), Port Unreachable
(Code 3). This ICMP message is sent back to your machine, which understands it
as the last hop, so traceroute exits, giving you the hops between you and the
destination.
The UDP
packets are sent from a high port, destined to another high port. On a Linux
box, these ports were not the same, although both were usually in the 33000
range. The source port stayed the same throughout the session; however, the
destination port was increased by one for each packet sent out.
One note:
traceroute actually sends one UDP packet at a time; it waits for the return
ICMP message, sends the second UDP packet, waits, sends the third, waits, and
so on.
If during
the session you receive * * *, it could mean that the router at that point in
the path does not return ICMP messages, that it returns messages with a TTL too
small to reach your machine, or that it is a router with buggy software. After
a * * * within the path, traceroute still increments the TTL by 1, thus
continuing the path determination.
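The logic described above can be sketched in a few lines of Python. This is a
simplified illustration only (one UDP probe per TTL instead of three, the raw
ICMP receive socket needs root privileges, and port 33434 is just the
conventional traceroute starting port), not a faithful copy of any particular
implementation:

import socket

def traceroute(dest_name: str, max_hops: int = 30, port: int = 33434, timeout: float = 2.0):
    dest_addr = socket.gethostbyname(dest_name)
    for ttl in range(1, max_hops + 1):
        recv = socket.socket(socket.AF_INET, socket.SOCK_RAW, socket.IPPROTO_ICMP)
        send = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
        recv.settimeout(timeout)
        recv.bind(("", 0))
        send.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, ttl)   # TTL grows by 1 per probe
        send.sendto(b"", (dest_addr, port))
        try:
            _, (hop_addr, _) = recv.recvfrom(512)   # Time Exceeded or Port Unreachable
        except socket.timeout:
            hop_addr = "*"                          # the "* * *" case: no ICMP came back
        finally:
            send.close()
            recv.close()
        print(f"{ttl:2d}  {hop_addr}")
        if hop_addr == dest_addr:                   # Port Unreachable from the destination
            break

# traceroute("www.yahoo.com")   # uncomment to run (requires privileges)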
8. What is Modular Policy?
Modular
Policy Framework provides a consistent and flexible way to configure security
appliance features. For example, you can use Modular Policy Framework to create
a timeout configuration that is specific to a particular TCP application, as
opposed to one that applies to all TCP applications.
Various modules in the ASA?
5500-X series –
started doing IPS in software.
ASA-SM for
the Catalyst 6500.
CX blade –
for context-aware security, with PRSM (Cisco Prime Security Manager) for management.
11. Explain
about Security Context. Explain about Active/Standby and Active/Active
14. What is
Firewall?
15. How to forcefully make the
secondary firewall the active firewall?
Issue the failover
active command on the standby unit.
16. Static NAT syntax?
Pre-8.3 syntax:
static (inside,outside) <pub> <priv>
8.3+ syntax:
object network <name>
 host <priv>
 nat (inside,outside) static <pub>
17. About
SSL VPN?
18. Command to disable
anti-spoofing in ASA
no ip verify reverse-path interface outside
22. How many packets are
exchanging in Main mode and aggressive mode?
6 and 3.
Details already done in earlier
question.
23. What is PFS?
Already
done.
What is tunnel group and group
policy?
Groups and
users are core concepts in managing the security of virtual private networks
(VPNs) and in configuring the ASA. They specify attributes that determine user
access to and use of the VPN. A group is a collection of users treated
as a single entity. Users get their attributes from group policies.
A connection profile identifies the group policy for a specific
connection. If you do not assign a particular group policy to a user, the
default group policy for the connection applies.
25. Commands to allow
administrative SSH access on the firewall
domain-name cisco.com
crypto key generate rsa
ssh 0.0.0.0 0.0.0.0 inside
26. How does
failover work?
30. What routing protocols
are supported on the ASA?
Rip
Eigrp
Ospf
31. Port numbers
for ESP and AH
This is a trick
question: ESP and AH are IP protocols (protocol numbers 50 and 51 respectively), so they have no TCP/UDP port numbers.
32. What is the difference
between ESP and AH
The basic
difference is that ESP provides actual encryption. It encrypts the payload of
the packet and protects it from snooping.
AH only
provides message authentication. In other words, AH only lets the receiver
verify that the message is intact and unaltered, but it doesn't encrypt the
message by itself.
33. What is spoofing and what is
anti-spoofing ?
How does it work
IP spoofing attack is when an
intruder attempts to disguise itself by pretending to have the source IP
address of a trusted host to gain access to specified resources on a trusted
network. IP spoofing is basically forging or falsifying (spoofing) the source
IP addresses in IP packets. An intruder crafts an IP datagram with a source IP
address that does not belong to them.
Applications of IP spoofing
Many other
attacks rely on IP spoofing mechanism to launch an attack, for example SMURF
attack (also known as ICMP flooding) is when an intruder sends a large number
of ICMP echo requests (pings) to the broadcast address of the reflector subnet.
The source addresses of these packets are spoofed to be the address of the
target victim. For each packet sent by the attacker, hosts on the reflector
subnet respond to the target victim, thereby flooding the victim network and
causing congestion that results in a denial of service (DoS).
35. How
firewall process the packet (rule, route, nat)
Here are the
individual steps in detail:
•
The packet
arrives at the ingress interface.
•
Once the
packet reaches the internal buffer of the interface, the input counter of the
interface is incremented by one.
•
Cisco ASA
will first verify if this is an existing connection by looking at its internal
connection table details. If the packet flow matches an existing connection,
then the access-control list (ACL) check is bypassed, and the packet is moved
forward.
If packet flow does not match an existing connection, then TCP
state is verified. If it is a SYN packet or UDP packet, then the connection
counter is incremented by one and the packet is sent for an ACL check. If it is
not a SYN packet, the packet is dropped and the event is logged.
•
The packet
is processed as per the interface ACLs. It is verified in sequential order of
the ACL entries and if it matches any of the ACL entries, it moves forward.
Otherwise, the packet is dropped and the information is logged. The ACL hit
count will be incremented by one when the packet matches the ACL entry.
•
The packet
is verified for the translation rules. If a packet passes through this check,
then a connection entry is created for this flow, and the packet moves forward.
Otherwise, the packet is dropped and the information is logged.
•
The packet
is subjected to an Inspection Check. This inspection verifies whether or not
this specific packet flow is in compliance with the protocol. Cisco ASA has a
built-in inspection engine that inspects each connection as per its pre-defined
set of application-level functionalities. If it passed the inspection, it is
moved forward. Otherwise, the packet is dropped and the information is logged.
Additional Security-Checks will
be implemented if a CSC module is involved.
•
The IP
header information is translated as per the NAT/PAT rule and checksums are
updated accordingly. The packet is forwarded to AIP-SSM for IPS related
security checks, when the AIP module is involved.
•
The packet
is forwarded to the egress interface based on the translation rules. If no
egress interface is specified in the translation rule, then the destination
interface is decided based on global route lookup.
•
On the
egress interface, the interface route lookup is performed. Remember, if the
translation rule specifies the egress interface, that rule takes priority.
•
Once a Layer
3 route has been found and the next hop identified, Layer 2 resolution is
performed. Layer 2 rewrite of MAC header happens at this stage.
The packet
is transmitted on the wire, and interface counters increment on the egress
interface.
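To make the ordering easier to remember, here is a toy Python sketch of the
decision sequence (connection table, then interface ACL, then NAT, then egress
selection). The helper structures are made up for illustration, and the sketch
deliberately leaves out inspection, the CSC/AIP modules and the Layer 2
rewrite:

def process_packet(pkt, conn_table, acl_permits, nat_rules):
    """Return 'forward ...' or 'drop: <reason>' following the order of checks above.
    pkt is a dict, conn_table a set of 5-tuples, acl_permits a callable,
    nat_rules a dict mapping real source IP -> (mapped IP, egress interface)."""
    flow = (pkt["src"], pkt["sport"], pkt["dst"], pkt["dport"], pkt["proto"])
    if flow not in conn_table:                        # existing connections skip the ACL
        if pkt["proto"] == "tcp" and not pkt.get("syn"):
            return "drop: tcp packet without an existing connection"
        if not acl_permits(pkt):
            return "drop: interface acl"
        conn_table.add(flow)                          # connection counter + new entry
    if pkt["src"] not in nat_rules:
        return "drop: no translation rule"
    mapped, egress = nat_rules[pkt["src"]]            # NAT rewrite decides the egress
    pkt["src"] = mapped
    return f"forward via {egress}"

conns = set()
nat = {"10.1.1.5": ("198.51.100.5", "outside")}
pkt = {"src": "10.1.1.5", "sport": 51000, "dst": "93.184.216.34",
       "dport": 443, "proto": "tcp", "syn": True}
print(process_packet(pkt, conns, lambda p: p["dport"] in (80, 443), nat))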
39. Can the ASA build a VPN with another
vendor's firewall?
Yes. IPSec is
standards-based, so the ASA can form VPN tunnels with any compliant peer.
41. Does it
support ISP redundancy? Yes.
Terminating
two ISPs on ASA/PIX:

  ISP1 (1.1.1.2) -------------- Internet
        |                          |
   (1.1.1.1) outside               |
    PIX/ASA (2.2.2.1) ---- (2.2.2.2) ISP2
   (3.3.3.1) inside
        |
  Internal Network
Let's say the
customer has the above setup, with ISP1 being the primary ISP
and ISP2
being the Secondary ISP.
I'm assuming
that you all know how ISP failback is configured and
how it
functions. To summarize, in ISP failback all traffic goes out
using ISP1
and if it fails, ASA/PIX starts routing traffic via ISP2.
Scenario I
==========
Now,
customer does not want to configure ISP failback, but he needs
to route Web
(port 80,443) traffic via ISP2 and all other traffic
via ISP1.
This requires PBR, which is not supported on ASA/PIX, but
we can
configure a workaround on ASA/PIX to make it work.
Following
are the commands which will achieve it:
route ISP1 0 0 1.1.1.2      // Default route pointing to ISP1
route ISP2 0 0 2.2.2.2 2    // Default route with Metric 2 via ISP2
static (ISP2,inside) tcp 0.0.0.0 80 0.0.0.0 80
static (ISP2,inside) tcp 0.0.0.0 443 0.0.0.0 443
sysopt noproxyarp inside
nat (inside) 1 0 0
global (ISP1) 1 interface
global (ISP2) 1 interface
That’s it !!
Now all the traffic destined to any address on port 80/443
will be
forcibly put on ISP2 interface and routed from there.
Note: This
stuff requires that we KNOW what the destination ports are,
if there is some traffic which uses
dynamic ports, like voice traffic
we will have to route it via ISP1 and
cannot make it route via ISP2.
Scenario II
===========
In the same
setup, if customer says that he wants half traffic to go
via ISP1 and
half traffic via ISP2, first you need to explain customer
that ASA is
NOT a load-balancer or packet-shaper. Hence we cannot
*truly*
achieve this, but we may configure ASA in such a manner that
traffic for
some destination IP address is routed via ISP1 and some
is routed
via ISP2. Following would be configuration commands in this
scenario-
nat (inside) 1 0 0
global (ISP1) 1 interface
global (ISP2) 1 interface
route ISP1 128.0.0.0 128.0.0.0 1.1.1.2
route ISP2 0.0.0.0 128.0.0.0 2.2.2.2
The first
route sends all destination addresses whose first bit is 1 (128.0.0.0/1) to
1.1.1.2 on ISP1.
The second
route sends all destination addresses whose first bit is 0 (0.0.0.0/1) to
2.2.2.2 on ISP2.
Note: This
will do traffic routing based on *Destination* IP addresses and
NOT based on traffic load. As I
mentioned, ASA is NOT a packet-shaper.
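A quick way to check which half of the address space a given destination falls
into under this configuration (and therefore which ISP it would use) is
Python's ipaddress module; the sample destinations below are arbitrary:

import ipaddress

isp1 = ipaddress.ip_network("128.0.0.0/1")   # first bit 1 -> routed to 1.1.1.2 (ISP1)
isp2 = ipaddress.ip_network("0.0.0.0/1")     # first bit 0 -> routed to 2.2.2.2 (ISP2)

for dst in ["8.8.8.8", "131.107.0.1", "203.0.113.7", "93.184.216.34"]:
    ip = ipaddress.ip_address(dst)
    path = "ISP1" if ip in isp1 else "ISP2"
    print(f"{dst:>15}  ->  {path}")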
46. What is Data Confidentiality?
Data
confidentiality is provided via encryption to protect data from eavesdropping
attacks; supported encryption algorithms include DES, 3DES, and AES.
47. What is Data Integrity?
Data
integrity and authentication are provided via HMAC functions to verify that
packets haven't been tampered with and are being received from a valid peer; in
other words, to prevent a man-in-the-middle or session hijacking attack.
Supported HMAC functions include MD5 and SHA-1.
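Conceptually, the HMAC check works like the short Python snippet below. It is
illustrative only: the key shown is made up, and a real IPSec peer derives its
keys during IKE and computes the HMAC over the AH/ESP-protected packet:

import hashlib
import hmac

key = b"shared-secret-derived-during-ike"       # illustrative key, not a real SA key
payload = b"ESP-protected packet payload"

tag = hmac.new(key, payload, hashlib.sha1).hexdigest()   # sender attaches this value
print(tag)

# The receiver recomputes the HMAC over what it received and compares:
ok = hmac.compare_digest(hmac.new(key, payload, hashlib.sha1).hexdigest(), tag)
print("packet intact and sent by a peer that knows the key:", ok)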
48. Anti-replay
Anti-replay
detection is provided by including encrypted sequence numbers in data packets
to ensure that a replay attack doesn't occur from a man-in-the-middle device.
49. Explain about Main mode and
Aggressive mode in Phase I?
ISAKMP/IKE
Phase 1 is basically responsible for setting up the secure management
connection. However, there are two modes for performing these three steps:
Main,
Aggressive Modes
Main Mode:
Main mode
performs three two-way exchanges totaling six packets. The three exchanges are
the three steps listed in the last section: negotiate the security policy to
use for the management connection, use DH to derive the keys for the
encryption algorithm and HMAC function negotiated in Step 1, and perform device
authentication using either pre-shared keys, RSA encrypted nonces, or RSA
signatures (digital certificates).
Main mode
has one advantage: the device authentication step occurs across the secure
management connection, because this connection was built in the first two
steps. Therefore, any identity information that the two peers need to send to
each other is protected from eavesdropping attacks. This is the Cisco default
mode for site-to-site sessions and for remote access connections that use
certificates for device authentication.
Aggressive Mode:
In
aggressive mode, two exchanges take place. The first exchange contains a list
of possible policies to use to protect the management connection, the public
key from the public/private key combination created by DH, identity
information, and verification of the identity information (for example, a
signature). All of this is squeezed into one packet. The second exchange is an
acknowledgment of the receipt of the first packet, sharing the encrypted keys
(done by DH), and whether or not the management connection has been established
successfully.
Aggressive
mode has one main advantage over main mode: it is quicker in establishing the
secure management connection. However, its downside is that any identity
information is sent in clear text; so if someone was eavesdropping on the
transmission, they could see the actual identity information used to create the
signature for device authentication. This shouldn't be a security issue, but if
you are concerned about this, you can always use main mode.
As I
mentioned in the last section, main mode is the default mode for Cisco VPNs
with one exception: Aggressive mode is the default mode with the Cisco remote
access VPN if the devices will be using group pre-shared keys for device authentication.
50. Explain about Transport mode
and Tunnel mode in Phase II?
Phase 2
Connection Modes
As I
mentioned in the last two sections, there are two types of modes that AH and
ESP can use to transport protected information to a destination:
Transport
mode, Tunnel mode
In transport
mode, the real source and destination of the user data are performing the
protection service. It becomes more difficult to manage as you add more and
more devices using this connection mode. This mode is commonly used between two
devices that need to protect specific information, like TFTP transfers of
configuration files or syslog transfers of logging messages.
In tunnel
mode, intermediate devices (typically) are performing the protection service
for the user data. This connection mode is used for site-to-site and remote
access connections. Because the original IP packet is protected and embedded in
AH/ESP and an outer IP header is added, the internal IP packet can contain
private IP addresses. Plus, if you're using ESP for encryption, the real source
and destination of the user data is hidden from eavesdroppers. The main
advantage of tunnel mode over transport mode is that the protection service
function can be centralized on a small number of devices, reducing the amount of
configuration and management required. Both of these modes were discussed in
detail in Chapter 1, "Overview of VPNs."
51. PPTP?
PPTP: PPTP
originally was developed by Microsoft to provide a secure remote access
solution where traffic needed to be transported from a client, across a public
network, to a Microsoft server (VPN gateway). One of the interesting items
about PPTP's implementation is that it is an extension of the Point-to-Point
Protocol (PPP). Because PPTP uses PPP, PPTP can leverage PPP's features. For
example, PPTP allows the encapsulation of multiple protocols, such as IP, IPX,
and NetBEUI, via the VPN tunnel. Also, PPP supports the use of authentication
via PAP, CHAP, and MS-CHAP. PPTP can use this to authenticate devices.
52. L2TP?
L2TP: L2TP
is a combination of PPTP and L2F. It is defined in RFCs 2661 and 3438. L2TP
took the best of both PPTP and L2F and integrated them into a single protocol.
Like PPTP, L2TP uses PPP to encapsulate user data, allowing the multiple
protocols to be sent across a tunnel. L2TP, like PPTP, extends the PPP
protocol. As an additional security enhancement, L2TP can be placed in the
payload of an IPSec packet, combining the security advantages of IPSec and the
benefits of user authentication, tunnel address assignment and configuration,
and multiple protocol support with PPP. This combination is commonly referred
to as L2TP over IPSec or L2TP/IPSec. The remainder of this chapter is devoted
to an overview of L2TP, how it is implemented, and the advantages it has over
PPTP.
What port does ping work over?
A trick
question, to be sure, but an important one. If he starts throwing out port
numbers you may want to immediately move to the next candidate. Hint: ICMP is a
layer 3 protocol (it doesn’t work over a port). A good variation of this
question is to ask whether ping uses TCP or UDP. An answer of either is a fail,
as those are layer 4 protocols.
How exactly does
traceroute/tracert work at the protocol level?
This is a
fairly technical question but it’s an important concept to understand. It’s not
natively a “security” question really, but it shows you whether or not they
like to understand how things work, which is crucial for an InfoSec
professional. If they get it right you can lighten up and offer extra credit for
the difference between Linux and Windows versions.
The key point people usually miss
is that each packet that’s sent out doesn’t go to a different place. Many
people think that it first sends a packet to the first hop, gets a time. Then
it sends a packet to the second hop, gets a time, and keeps going until it gets
done. That’s incorrect. It actually keeps sending packets to the final
destination; the only change is the TTL that’s used. The extra credit is the
fact that Windows uses ICMP by default while Linux uses UDP.
What’s the difference between
Diffie-Hellman and RSA?
Diffie-Hellman
is a key-exchange protocol, and RSA is an encryption/signing protocol. If they
get that far, make sure they can elaborate on the actual difference, which is
that one requires you to have key material beforehand (RSA), while the other
does not (DH). Blank stares are undesirable.
What kind of attack is a standard
Diffie-Hellman exchange vulnerable to?
Man-in-the-middle,
as neither side is authenticated.
Q. What’s
the difference between the WWW and the Internet?
A. This question will throw a lot
of people off, but it is absolutely valid. The Internet is a collection of
computers and networks that can all talk to each other, while WWW is an
application that runs on the Internet.
What are zero day attacks?
Zero-day
exploits occur when an exploit for a vulnerability is created before, or on the
same day that, the vulnerability becomes known to the world at large. IT
organizations are constantly fighting to keep their systems patched and
updated, but the reality is that it takes time to adequately test a patch
against all applications running on the servers. This leaves organizations
exposed as the window between the discovery of a vulnerability and the launch
of an exploit narrows. As such, an attacker can effectively compromise
unprotected servers at will.
What are the essential
characteristics of an IPS?
These are
the essential characteristics of a good IPS device:
a. Block
known and unknown (including zero-day) attacks.
b. Never
block legitimate traffic even when under attack.
c. Since it
operates inline, it must be a resilient hardware solution that will not be a
single point of network failure.
d. Not
reliant on signatures as the primary form of defense (a method adopted by IPS
products that spawned from IDS technologies that are susceptible to false
positives).
e. Not add
any discernible latency under extreme load or attack, since this will
negatively impact business users.
f. Rapid
configuration for immediate protection with minimal ongoing operational
maintenance.
g. Access to
a centralized management solution that has meaningful reporting capabilities.
h. As
network capacity and performance increase over time, the IPS solution must be
scalable in line with those requirements.
i. Cope with
new advanced types of security threats in the future.
j. Provide
relevant data for forensic analysis purposes and alert reporting.
k. Offer
fine-grained granularity to decide what type of malicious traffic is to be blocked
(for instance Web servers and email servers need to be configured differently).
l. Combine
rate-based and content-based protection on one device.
m. Post
sales support to provide updates on newly discovered vulnerabilities and advice
(signatures, patches or configuration updates) on how to protect against the
exploits.
What are the different types of
IPS devices?
The IPS
devices can be signature based or stateful inspection based.
What are the disadvantages for
using signature based IPS devices?
Signature,
or pattern, matching is one of a number of methods used in an IPS to detect and
block exploits of vulnerabilities. However, if it is used as the primary
protection mechanism, you will face limitations in what will be successfully
blocked. Signatures are prone to generating false positives, which means that,
on their own, they will block legitimate traffic. In addition, attackers have
found ways around pattern-matching methods by making relatively small changes
to the attack code that render the detection useless, so the attack is not
successfully blocked by the IPS. Another trick commonly used is to send packets
out of order or through asymmetrical traffic routes. Unless the IPS has a
packet reorder engine and is fully stateful, the attack will never be
recognized and will simply pass through to the ultimate target.
Where can I
find updates about new security holes?
You can find
updates on new security holes on security advisory websites. It is important
that a security administrator stays updated about new security holes; as the
saying goes, prevention is better than cure.
Some of the security advisories
are as listed below:
http://www.cert.org/
CERT (Computer Emergency Response
Team) was set up by a number of universities and DARPA in response to the
Morris Worm of 1988.
www.ciac.org
CIAC publishes security bulletins and virus and hoax information.
http://isc.sans.org/
This is another good advisory
from sans.org
What questions should be asked to
the IDS vendor?
The basic
questions include the following:
How good is
the reporting architecture?
How easy is it to manage false positives?
How long does it take to track
down alerts and identify the situation? How much manpower is needed to use this
product?
How many signatures does the system support?
What intrusion response features
does the product have?
What does it cost?
What would
be the Return on Investment?
The security administrator would need to
calculate this along with other departments in the organization and also the
security vendor.
What do signature
updates and maintenance cost?
Intrusion detection is much like virus
protection: a system that hasn't been updated for a year will miss common new
attacks.
At what
real-world traffic levels does the product become blind, in packets/second?
First, what
segments do you plan on putting the IDS onto? If you have only a 1.5-mbps
connection to the Internet that you want to monitor, you don't need the fastest
performing system. On the other hand, if you are trying to monitor a server
farm in your corporation in order to detect internal attacks, a hacker could
easily smurf the segment in order to blind the sensor. The most important
metric is packets/second.
How easy is
the product to evade?
Try to get in-depth information about this part. Some of the
simple evasion tactics to fool IDS include fragmentation, avoiding defaults,
slow scans, coordinated low bandwidth attacks, address spoofing/proxying, and
pattern change evasion.
How scalable
is the IDS system?
How many sensors does the system support? How big can the database
be? What are the traffic levels when forwarding information to the management
console? What happens when the management console is overloaded? These are some
questions you might want to be answered.
How are intrusions detected?
Anomaly
detection
The most common way people approach network intrusion detection is
to detect statistical anomalies. The idea behind this approach is to measure a
"baseline" of such stats as CPU utilization, disk activity, user
logins, file activity, and so forth. Then, the system can trigger when there is
a deviation from this baseline.
The benefit of this approach is that it can
detect the anomalies without having to understand the underlying cause behind
the anomalies.
For example, let's say that you monitor the traffic from
individual workstations. Then, the system notes that at 12am, a lot of these
workstations start logging into the servers and carrying out tasks. This is
something interesting to note and possibly take action on.
Signature
recognition
The majority of commercial products are based upon examining the
traffic looking for well-known patterns of attack. This means that for every
hacker technique, the engineers code something into the system for that
technique.
This can be as simple as a pattern match. The classic example is
to examine every packet on the wire for the pattern "/cgi-bin/phf?",
which might indicate somebody attempting to access this vulnerable CGI script
on a web server.
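A toy Python sketch of this kind of pattern matching is shown below. The two
signatures are illustrative stand-ins; real engines (Snort rules, the Cisco
signature engines listed earlier) add protocol decoding, stream reassembly and
much more:

SIGNATURES = {
    b"/cgi-bin/phf?": "WEB-CGI phf access attempt",
    b"\x90" * 16:     "possible NOP sled (shellcode)",
}

def match_signatures(packet_payload: bytes):
    """Return the names of all signatures whose pattern appears in the payload."""
    return [name for pattern, name in SIGNATURES.items() if pattern in packet_payload]

print(match_signatures(b"GET /cgi-bin/phf?Qalias=x HTTP/1.0"))
print(match_signatures(b"GET /index.html HTTP/1.0"))   # no match -> empty list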
What is
Intrusion Detection?
Intrusion
Detection is the active process of documenting and catching attackers and
malicious code on a network. It is implemented in two types of software:
host-based software and network-based software.
Why is an Intrusion Detection System (IDS)
important?
Computers
connected directly to the Internet are subject to relentless probing and
attack.
While
protective measures such as safe configuration, up-to-date patching, and
firewalls are all prudent steps they are difficult to maintain and cannot
guarantee that all vulnerabilities are shielded. An IDS provides defense in
depth by detecting and logging hostile activities. An IDS system acts as
"eyes" that watch for intrusions when other protective measures fail.
What is the difference between a Firewall and
a Intrusion Detection System?
A firewall
is a device installed normally at the perimeter of a network to define access
rules for access to particular resources inside the network. On the firewall
anything that is not explicitly allowed is denied. A firewall allows and denies
access through the rule base.
An Intrusion Detection System is a software
or hardware device installed on the network (NIDS) or host (HIDS) to detect and
report suspicious activity.
In simple terms you can say that while a
firewall is a gate or door in a superstore, an IDS device is a security camera.
A firewall can block connections, while an IDS cannot. An IDS device can,
however, alert on any suspicious activity.
An Intrusion Prevention System is
a device that can start blocking connections proactively if it finds the
connections to be suspicious in nature.
If an IDS device cannot prevent a hack, then
why have IDS devices?
Agreed that
an IDS device cannot prevent a hack and can only alert on suspicious
activity. However, if we are to go by past experience, hacks and system
compromises are not something that happens overnight. Planned compromise
attempts can take several days, weeks, months and in some cases even years. So
an IDS device can alert you so that you can take the desired precautions in
protecting the resources.
What is a
network based IDS system?
An IDS is a
system designed to detect and report unauthorized attempts to access or utilize
computer and/or network resources. A network-based IDS collects, filters, and
analyzes traffic that passes through a specific network location.
Are there
other types of IDS besides network based?
The other
common type of IDS is host-based. In host-based IDS each computer (or host) has
an IDS client installed that reports either locally or to a central monitoring
station. The advantage of a host-based IDS is that the internal operation and
configuration of the individual computers can be monitored.
What is the difference between Host based
(HIDS) and Network based IDS (NIDS)?
HIDS is
software which reveals if a machine is being or has been compromised. It does
this by checking the files on the machine for possible problems. Software
described as host-based IDS could include file integrity checkers (Tripwire),
anti-virus software (Norton AV, McAfee), server logs (Event Viewer or syslog),
and in some ways even backup software acting as a HIDS. ISS RealSecure has many
HIDS products.
NIDS is
software which monitors network packets and examines them against a set of
signatures and rules. When the rules are violated the action is logged and the
Admin could be alerted. Examples of NIDS software are SNORT, ISS Real Secure,
Enterasys Dragon and Intrusion.
Are there any drawbacks of host-based
IDS systems?
There are
three primary drawbacks of a host-based ID:
(1) It is
harder to correlate network traffic patterns that involve multiple computers;
(2) Host-based IDSs can be very
difficult to maintain in environments with a lot of computers, with variations
in operating systems and configurations, and where computers are maintained by
several system administrators with little or no common practices;
(3) Host-based IDSs can be
disabled by attackers after the system is compromised.
Why, when and where to use host based IDS
systems?
Host based
IDS systems are used to closely monitor any actions taking place on important
servers and machines. Host based IDS systems are used to detect any anomalies
and activities on these important and critical servers. You use Host based IDS
systems when you cannot risk the compromise of any server. The server has to be
very important and mission critical to use Host based IDS systems on these
servers. Host based IDS systems are agents that run on the critical servers.
The agent is installed on the server that is being monitored.
What is a Signature?
A signature
is recorded evidence of a system intrusion, typically as part of an intrusion
detection system (IDS). When a malicious attack is launched against a system,
the attack typically leaves evidence of the intrusion in the system’s logs.
Each intrusion leaves a kind of footprint behind (e.g., unauthorized software
executions, failed logins, misuse of administrative privileges, file and
directory access) that administrators can document and use to prevent the same
attacks in the future. By keeping tables of intrusion signatures and
instructing devices in the IDS to look for the intrusion signatures, a system’s
security is strengthened against malicious attacks.
Because each signature is
different, it is possible for system administrators to determine by looking at
the intrusion signature what the intrusion was, how and when it was
perpetrated.
What are the common types of attacks and
signatures?
There are
three types of attacks:
Reconnaissance These include ping sweeps, DNS zone transfers,
e-mail recons, TCP or UDP port scans, and possibly indexing of public web
servers to find cgi holes.
Exploits Intruders will take advantage of
hidden features or bugs to gain access to the system.
Denial-of-service (DoS) attacks
Where the intruder attempts to crash a service (or the machine), overload
network links, overload the CPU, or fill up the disk. The intruder is not
trying to gain information, but simply to act as a vandal to prevent you from
making use of your machine.
The signatures are written based on these
types of attacks.
What are
policy scripts?
Policy
scripts are programs written to detect events. They contain the rules that
describe what sorts of activities are deemed troublesome. They analyze the
network events and initiate actions based on the analysis.
Can the
scripts take action?
Yes. Scripts
generate a number of output files recording the activity seen on the network (including
normal, non-attack activity). They also can generate alerts signifying that a
problem has been seen. In addition, scripts can execute programs, which can
terminate existing connections, block traffic from hostile hosts (by inserting
blocks into a router access control list), send e-mail messages, or page the
on-call staff.
What is a false positive?
Most IDS use
signatures to compare against attacks. Sometimes normal activity triggers the
IDS. The IDS detects an attack signature during normal activity. Part of
maintaining the IDS is knowing when what you are dealing with is a false
positive and tuning the IDS to avoid them.
What is a false negative?
Most IDS use
signatures to compare against attacks. Sometimes attack activity doesn't
trigger the IDS to cut alerts. This would mean that a real attack is happening
and the IDS is not sending an alert.
How can I test my IDS?
We suggest
the following steps:
1) Place the NIDS on a test network with a hub/switch and a
separate server.
2) Run a tool like Nessus against this server.
3) When Nessus is done, what
attacks did it detect? If it did not detect all the attacks does the NIDS have
the latest signatures? Can you write your own rules for the NIDS to catch the
attack?
4) After the
tests with Nessus, then run the packet building tools. Make various illegal
packets and aim them at the server. Does it detect the packets?
5) Repeat steps 2 - 4 against the
NIDS machine.
6) Harden the NIDS to help prevent it from being compromised.
7) Place it on the production
network and see how many false positives it gets.
8) Tune it down from the false
positives.
9) As new vulnerabilities occur, update the Nessus signatures and
test to see if the NIDS catches them.
What are some personal IDS/firewalls?
These are
software packages designed to be used on a single user's PC. While they don't
fit into the enterprise class of IDS, there are several programs that can
provide firewall and IDS services for a single user/PC. Here are a few:
What tools can
be used for building packets?
These are
some tools that can be used for building packets:
What is network Intrusion Prevention?
Intrusion
Prevention Systems (IPS) automatically detect and block malicious network and
application traffic, while allowing legitimate traffic to continue through to
its destination. An IPS must operate inline with minimal impact on network
latency and be scalable to cope with the demands of a multi-gigabit network
environment.
Why do I need an Intrusion Prevention System (IPS) if I currently
have a Firewall and an Intrusion Detection System (IDS)?
Firewalls are typically deployed
at the network perimeter. However, many attacks can easily bypass the perimeter
and many are launched, sometimes inadvertently, from within the organization.
For example, consider the following situations:
• An employee who logs on to the
corporate network with a laptop computer that became infected while using it at
home.
• A
consultant who downloads malware from their corporate network, while working at
your facility and inadvertently spreads it onto your network.
• Remote users who log on using a
virtual private network.
• Disgruntled employees.
An IDS might be effective at
detecting suspicious activity, but it does not provide adequate protection
against attacks. Worm attacks, such as Slammer and Blaster, spread so rapidly
that by the time an alert is generated, the damage has already been done.
To be effective, an intrusion
prevention solution must be inline and able to automatically detect and block
malicious packets within normal network traffic before the malicious payload
causes any damage. This prevention must occur under extreme traffic loads and
more importantly, good traffic must never be blocked, even while under an
attack. Finally, the IPS device must operate with switch-like latency at all
times.
Given these
parameters for defining an effective intrusion prevention solution, it is
simple to see why simply adding blocking capabilities to existing security
infrastructure, such as firewalls and IDS, is not an effective intrusion
prevention solution.
The concept of blocking malicious network traffic before it
reaches its intended targets is simple. However, given the increasing
sophistication of attacks and their sheer brute force, security managers need an
IPS solution that can cope with these demands.
OSPF
“Why doesn't the internet use OSPF?”
Running a
shortest-path-first calculation over a large link-state database takes
considerable CPU and memory, even on today's routers. Edsger W. Dijkstra's SPF
algorithm makes the computation efficient, but at Internet scale it is still
expensive, and every time a network is added or deleted the link-state database
changes and an SPF recalculation happens. This is the main reason OSPF can't be
used as the Internet's routing protocol, and why you don't want to inject your
full BGP Internet routing table into OSPF.
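For reference, the SPF computation itself is Dijkstra's algorithm run over the
link-state database. A minimal Python sketch, with a made-up four-router
topology and arbitrary costs, looks like this:

import heapq

def spf(graph, root):
    """Dijkstra's shortest-path-first: graph[node] = {neighbor: cost}.
    Returns the best-path cost from root to every reachable node."""
    dist = {root: 0}
    pq = [(0, root)]
    while pq:
        cost, node = heapq.heappop(pq)
        if cost > dist.get(node, float("inf")):
            continue                                  # stale queue entry
        for neighbor, link_cost in graph[node].items():
            new_cost = cost + link_cost
            if new_cost < dist.get(neighbor, float("inf")):
                dist[neighbor] = new_cost
                heapq.heappush(pq, (new_cost, neighbor))
    return dist

lsdb = {                                              # tiny four-router area
    "R1": {"R2": 10, "R3": 1},
    "R2": {"R1": 10, "R4": 1},
    "R3": {"R1": 1, "R4": 10},
    "R4": {"R2": 1, "R3": 10},
}
print(spf(lsdb, "R1"))   # {'R1': 0, 'R2': 10, 'R3': 1, 'R4': 11}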
Describe OSPF
in your own words.
•
OSPF is a
fast-converging, link-state IGP used by millions.
•
OSPF forms
adjacencies with neighbors and shares information via the DR and BDR using Link
State Advertisements.
Areas in
OSPF are used to limit LSAs and summarize routes. Everyone connects to area
zero, the backbone.
OSPF areas, the
purpose of having each of them
Types of OSPF
LSA, the purpose of each LSA type
What exact LSA
type you can see in different areas
How OSPF establishes
neighbor relation, what the stages are
If OSPF router
is stuck in each stage what the problem is and how to troubleshoot it
http://www.cisco.com/en/US/tech/tk365/technologies_tech_note09186a0080094050.shtml?referring_site=bodynav
STP
How it works and the purpose
root election
Diff. port stages and timing for
convergence
Draw the typical diagram and
explain how diff types of STP work
What ports are blocking or
forwarding
How it works if there are
topology changes
Explain VLANs
How a L2 switch works with
broadcast, unicast, multicast, known/unknown traffic
Need to
refer to book for details.
What is HSRP and how it works?
PIM SPARSE AND DENSE MODES
GENERAL
Differences between RADIUS AND
TACACS:
What is an
Unnumbered Interface?
Consider the
network shown below. Router A has a serial interface S0 and an Ethernet
interface E0.
Router A's
Ethernet 0 interface can be configured with an IP address as shown below:
interface Ethernet0
 ip address 172.16.10.254 255.255.255.0
Logically,
to enable IP on interface S0, you would need to configure a unique IP address
on it. However, it is also possible to enable IP on the Serial interface and
bring it up without assigning a unique IP address to it. This is done by
borrowing an IP address already configured on one of the router's other
interfaces. To do this, the ip unnumbered interface mode command is used
as shown below.
interface Serial 0
 ip unnumbered Ethernet 0
The ip
unnumbered <type> <number> interface mode command borrows the
IP address from the specified interface to the interface on which the command
has been configured. Use of the ip unnumbered command results in the IP
address being shared by two interfaces. Thus, in our example, the IP address
which was configured on the Ethernet interface is also assigned to the Serial
interface, and both interfaces involved function normally. This can be verified
using the output of the show ip interface brief command, as shown below:
RouterA# show ip interface brief
Interface    IP-Address     OK?  Method  Status  Protocol
Ethernet0    172.16.10.254  YES  manual  up      up
Serial0      172.16.10.254  YES  manual  up      up
As you can
see from the output of the show ip interface brief command above, the
serial interface has an IP address identical to that of the Ethernet interface,
and both interfaces are fully functional. The interface that borrows its
address from one of the router's other functional interfaces is called the
"unnumbered interface". In our example, Serial 0 is the unnumbered
interface.
The only
real disadvantage that the unnumbered interface suffers from is that it is
unavailable for remote testing and management. You should also remember that
the unnumbered interface should borrow its address from an interface that is up
and running. If the unnumbered interface is pointing to an interface that is
not functional (that is, which does not show "Interface status UP",
"Protocol UP"), the unnumbered interface does not work. This is
precisely why it is recommended that the unnumbered interface point to a loopback
interface since loopbacks do not fail. Finally, remember that the ip
unnumbered command works on point-to-point interfaces only. When you
configure the command on the Multi-access interface (that is, Ethernet) or the
loopback interface, the following messages are displayed:
RouterA(config)# int e0
RouterA(config-if)# ip unnumbered serial 0
Point-to-point (non-multi-access) interfaces only
RouterA(config-if)# ip unnumbered loopback 0
Point-to-point (non-multi-access) interfaces only
1.Diff between
TCP & UDP?
Difference between TCP and UDP
Reliability:
TCP is a connection-oriented protocol. When a file or message is sent, it will
be delivered unless the connection fails; if part of it is lost, the receiver
requests retransmission, so there is no corruption while transferring a message.
UDP is a connectionless protocol. When you send data, you don't know if it will
get there; it could get lost on the way, and there may be corruption while
transferring a message.
Ordering:
TCP: if you send two messages along a connection, one after the other, you know
the first message will get there first. You don't have to worry about data
arriving in the wrong order.
UDP: if you send two messages out, you don't know what order they'll arrive in,
i.e. there is no ordering.
Weight:
TCP is heavyweight: when the low-level parts of the TCP "stream" arrive in the
wrong order, resend requests have to be sent, and all the out-of-sequence parts
have to be put back together, so it requires a bit of work to piece together.
UDP is lightweight: no ordering of messages, no tracking of connections, etc.
It's just fire and forget! This means it's a lot quicker, and the network card /
OS have to do very little work to translate the data back from the packets.
Delivery model:
TCP is streaming: data is read as a "stream," with nothing distinguishing where
one packet ends and another begins. There may be multiple packets per read call.
UDP is datagram-based: packets are sent individually and are guaranteed to be
whole if they arrive. One packet per one read call.
Examples:
TCP: World Wide Web (Apache, TCP port 80), e-mail (SMTP, TCP port 25, Postfix
MTA), File Transfer Protocol (FTP, port 21) and Secure Shell (OpenSSH, port 22).
UDP: Domain Name System (DNS, UDP port 53), streaming media applications such as
IPTV or movies, Voice over IP (VoIP), Trivial File Transfer Protocol (TFTP) and
online multiplayer games.
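The behavioural difference is easy to see with Python's socket module. In this
small sketch (which assumes nothing is listening on local port 50000), the UDP
send succeeds with no handshake and no delivery guarantee, while the TCP connect
fails outright because no peer completes the handshake:

import socket

# UDP: connectionless "fire and forget" -- sendto() succeeds even though
# nothing is listening; the datagram is simply lost.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.sendto(b"hello", ("127.0.0.1", 50000))
print("UDP datagram sent: no handshake, no delivery guarantee")
udp.close()

# TCP: connection-oriented -- connect() runs the three-way handshake first
# and fails immediately if no server is listening on the port.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    tcp.connect(("127.0.0.1", 50000))
except ConnectionRefusedError:
    print("TCP connect refused: no listener, so no connection, so no data sent")
else:
    print("TCP three-way handshake completed (something was listening after all)")
finally:
    tcp.close()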
WHAT IS ARP, RARP, PROXY ARP,
GRATUITOUS ARP
REFER BOOK
7.what is DHCP
relay agent ? if DHCP server locates in a different subnet , how would the
process works?
http://www.cisco.com/en/US/tech/tk648/tk361/technologies_tech_note09186a00800f0804.shtml
8 What is MTU
and fragmentation ?
“Explain the
DDoS mitigation techniques and how they work.”
Describe the
SSL communications between a server and a host's web browser.
Since
protocols can operate either with or without TLS (or SSL), it is necessary for
the client to indicate
to the server whether it
wants to set up a TLS connection or not. There are two main ways of achieving
this; one option is to use a different port number for TLS connections (for
example port 443 for HTTPS). The other
is to use the regular port number and have the client request that the server
switch the connection to TLS using a protocol-specific mechanism (for example STARTTLS for mail and news protocols).
Once the
client and server have decided to use TLS, they negotiate a stateful connection
by using a handshaking procedure.[6] During this handshake, the client and
server agree on various parameters used to establish the connection's security:
•
The client
sends the server the client's SSL version number, cipher settings,
session-specific data, and other information that the server needs to
communicate with the client using SSL.
•
The server
sends the client the server's SSL version number, cipher settings,
session-specific data, and other information that the client needs to
communicate with the server over SSL. The server also sends its own
certificate, and if the client is requesting a server resource that requires
client authentication, the server requests the client's certificate.
•
The client
uses the information sent by the server to authenticate the server—e.g., in the
case of a web browser connecting to a web server, the browser checks whether
the received certificate's subject name actually matches the name of the server
being contacted, whether the issuer of the certificate is a trusted certificate authority,
whether the certificate has expired, and, ideally, whether the certificate has
been revoked.[7] If the server cannot be authenticated, the user is warned of
the problem and informed that an encrypted and authenticated connection cannot
be established. If the server can be successfully authenticated, the client
proceeds to the next step.
•
Using all
data generated in the handshake thus far, the client (with the cooperation of
the server, depending on the cipher in use) creates the pre-master secret for the session, encrypts it with
the server's public key (obtained from the server's certificate, sent in step
2), and then sends the encrypted pre-master secret to the server.
•
If the
server has requested client authentication (an optional step in the handshake),
the client also signs another piece of data that is unique to this handshake
and known by both the client and server. In this case, the client sends both
the signed data and the client's own certificate to the server along with the
encrypted pre-master secret.
•
If the
server has requested client authentication, the server attempts to authenticate
the client. If the client cannot be authenticated, the session ends. If the
client can be successfully authenticated, the server uses its private key to
decrypt the pre-master secret, and then performs a series of steps (which the
client also performs, starting from the same pre-master secret) to generate the
master secret.
•
Both the
client and the server use the master secret to generate the session keys, which
are symmetric keys used to encrypt and decrypt information exchanged during the
SSL session and to verify its integrity (that is, to detect any changes in the
data between the time it was sent and the time it is received over the SSL
connection).
•
The client
sends a message to the server informing it that future messages from the client
will be encrypted with the session key. It then sends a separate (encrypted)
message indicating that the client portion of the handshake is finished.
•
The server
sends a message to the client informing it that future messages from the server
will be encrypted with the session key. It then sends a separate (encrypted)
message indicating that the server portion of the handshake is finished.
The SSL
handshake is now complete and the session begins. The client and the server use
the session keys to encrypt and decrypt the data they send to each other and to
validate its integrity.
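In practice, a library drives this whole exchange. The short Python sketch
below (using www.example.com purely as an illustrative host) performs the
handshake, certificate validation and key agreement described above at the
moment the plain TCP socket is wrapped:

import socket
import ssl

ctx = ssl.create_default_context()        # loads trusted CAs, enables hostname checking

with socket.create_connection(("www.example.com", 443)) as raw_sock:
    with ctx.wrap_socket(raw_sock, server_hostname="www.example.com") as tls:
        print("negotiated protocol:", tls.version())      # e.g. TLSv1.3
        print("cipher suite:", tls.cipher())
        print("server certificate subject:", tls.getpeercert()["subject"])
        # From here on, everything sent and received is protected by the session keys.
        tls.sendall(b"HEAD / HTTP/1.1\r\nHost: www.example.com\r\nConnection: close\r\n\r\n")
        print(tls.recv(200).decode(errors="replace"))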
“Difference
between RIP & EIGRP
Routing protocols comparison
How would you
troubleshoot a packet loss in the IPSec tunnel?
“TCP windowing
in detail”
In a TCP
session, how many sliding windows are there? Is it one which is shared between
client and server or two for client and server?”
How switch
communicates , create MAC table .How two user connected to different subnet
communicate each other.”
Refer to
book
In EIGRP, what is a Stuck in Active route?
When EIGRP
returns a stuck in active (SIA) message, it means that it has not received a
reply to a query. EIGRP sends a query when a route is lost and another feasible
route does not exist in the topology table. The SIA is caused by two sequential
events:
•
The route
reported by the SIA has gone away.
•
An EIGRP
neighbor (or neighbors) have not replied to the query for that route.
When the SIA
occurs, the router clears the neighbor that did not reply to the query. When
this happens, determine which neighbor has been cleared. Keep in mind that this
router can be many hops away.
HOW DOES TRACEROUTE WORK?
Yes - this
is a very tricky application to master since there are so many different
implementations. For example, Windows uses ICMP echoes by default, while most
Linux operating systems use UDP by default, with the option to use ICMP. The
Cisco IOS uses UDP, and there are even some implementations in the field that
rely on TCP.
While there
are many, many different implementations, the goal of traceroute is always the
same. Traceroute seeks to have the routers between the source and destination
identify themselves, and then have the destination respond to the source
management station to confirm its reachability.
In the case
of ICMP, the routers identify themselves by sending Time Exceeded ICMP packets
back to the source when the TTL is decremented to zero. The destination itself
responds to traceroute with an ICMP echo reply (or, for UDP-based probes, an
ICMP Port Unreachable).
For more
information on Cisco's implementation of both ping and traceroute - check out:
What is a
wildcard mask, and how is it different from a netmask?
Anyway...
Access Lists actually came before subnet masks. Remember way back when we lived
in an evil classful world. So back in like 1985, when access-lists came about
it was actually easier to code in assembler to do a NAND operation instead of
an AND. Thus the wildcarding.
When we
evolved into subnets (isn't everyone studying for their CCENT/CCNA exams so
incredibly happy about that progress?) someone figured out not only that normal
human beings weren't used to thinking "backwards" like the ACL masks,
but also there had to be some backwards compatibility with all the ancient IOS
versions. So subnet masks being "new' took their own form. ACLs being
"legacy" stayed the same.
A wildcard
mask is a mask of bits that indicates which parts of an IP address
are available for examination. In the Cisco IOS, they are used
in several places, for example:
•
To indicate
the size of a network or subnet for some routing protocols, such as OSPF.
At a
simplistic level a wildcard mask can be thought of as an inverted subnet mask. For example, a
subnet mask of 255.255.255.0 (binary equivalent =
11111111.11111111.11111111.00000000) inverts to a wildcard mask of 0.0.0.255.
A wildcard
mask is a matching rule. The rule for a wildcard mask is:
•
0 means that
the equivalent bit must match
•
1 means that
the equivalent bit does not matter
Subnet mask—A 32-bit combination used to
describe which portion of an address refers to the subnet and which part refers
to the host.
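As a quick IOS illustration of the inverted-mask idea (the addresses are only examples):
! ACL: 0 bits must match, 1 bits are "don't care" - this matches all of 192.168.1.0/24
access-list 10 permit 192.168.1.0 0.0.0.255
!
! OSPF uses the same wildcard notation to select which interfaces join area 0
router ospf 1
 network 10.0.0.0 0.255.255.255 area 0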
What is cidr?
CIDR
Classless
Interdomain Routing (CIDR) was introduced to improve both address space
utilization and routing scalability in the Internet. It was needed because of
the rapid growth of the Internet and growth of the IP routing tables held in
the Internet routers.
CIDR moves away
from the traditional IP classes (Class A, Class B, Class C, and so on). In CIDR,
an IP network is represented by a prefix, which is an IP address and some
indication of the length of the mask. Length means the number of left-most
contiguous mask bits that are set to one. So network 172.16.0.0 255.255.0.0 can
be represented as 172.16.0.0/16. CIDR also depicts a more hierarchical Internet
architecture, where each domain takes its IP addresses from a higher level.
This allows for the summarization of the domains to be done at the higher
level. For example, if an ISP owns network 172.16.0.0/16, then the ISP can
offer 172.16.1.0/24, 172.16.2.0/24, and so on to customers. Yet, when
advertising to other providers, the ISP only needs to advertise 172.16.0.0/16.
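As a rough IOS/BGP sketch of that summarization (the AS numbers and neighbor address are made up for illustration):
router bgp 65001
 neighbor 203.0.113.1 remote-as 65000
 network 172.16.1.0 mask 255.255.255.0
 network 172.16.2.0 mask 255.255.255.0
 ! Advertise only the /16 aggregate upstream and suppress the more-specific /24s
 aggregate-address 172.16.0.0 255.255.0.0 summary-only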
WHAT IS EZVPN?
Easy VPN is
a Cisco way of doing Remote Access VPNs. The idea behind it is to
configure
Secure Gateway (the device which terminates Remote Access VPNs)
and minimize
configuration burden on the Client.
This
technology has been developed for Cisco IPSec Client and so-called
hardware
clients i.e. ASA 5505 or IOS routers.
In EasyVPN
the Client does not need to configure any ISAKMP or IPSec
parameters,
all those parameters are negotiated during the connection. The
EasyVPN
Server must use Diffie-Hellman Group 2 to be able to negotiate
parameters
with the client. Because the first aggressive mode packet contains
the
Diffie-Hellman public value, only a single Diffie-Hellman group may be
specified in
the proposal. Each client must however supply EasyVPN Group
name and
password to be used for authentication and policy configuration. The
policy is a
bunch of attributes that may be sent down to the clients during the
connection.
Those attributes/parameters include DNS/WINS server, domain
name, IP
address pool, etc.
Easy VPN
uses IKE Aggressive mode for connection, so that the group name is
sent to the
EasyVPN Server in the very first message. The group name is not
encrypted, so
it is easy to sniff. Hence, another security
mechanism is
usually configured, called Extended Authentication (XAuth for short). This
requires
supplying additional user credentials during IKE Phase 1.5. This phase
is already
secured by ISAKMP SA so that all information is encrypted.
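A minimal IOS EasyVPN Server sketch along these lines (group name, pool, usernames and addresses are placeholders; an ASA configuration would look different):
aaa new-model
aaa authentication login VPN_XAUTH local
aaa authorization network VPN_GROUP local
username jdoe secret MyXauthPass
!
crypto isakmp policy 10
 encryption aes
 authentication pre-share
 group 2
!
! Group name, group password and the policy pushed to clients (DNS, domain, pool)
crypto isakmp client configuration group MYGROUP
 key GroupPassword
 dns 10.1.1.10
 domain example.com
 pool EZVPN_POOL
ip local pool EZVPN_POOL 10.10.10.1 10.10.10.50
!
crypto ipsec transform-set TS esp-aes esp-sha-hmac
crypto dynamic-map DYNMAP 10
 set transform-set TS
 reverse-route
!
! XAuth (Phase 1.5) and mode configuration tied to the crypto map
crypto map CMAP client authentication list VPN_XAUTH
crypto map CMAP isakmp authorization list VPN_GROUP
crypto map CMAP client configuration address respond
crypto map CMAP 65535 ipsec-isakmp dynamic DYNMAP
!
interface FastEthernet0/0
 crypto map CMAP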
Difference
between link state and distance vector protocols
5. What happens
when we type google.com in the browser? Explain layer-by-layer.
DHCP
Have you
ever thought how your computer gets an IP address? Well, it is important to
know that there are two ways through which a computer gets an IP address. One
is static while the other is dynamic.
Static
method is the one in which the administrator manually sets the IP
address on the machine. If your machine is connected to a network such as a LAN,
keep in mind that the IP address being set should not be the
same as the IP address of any other machine on the same network, as this would
lead to an IP address conflict and neither of the two machines would be able to
communicate reliably.
Dynamic
method is the one in which the computer (on system boot) asks a server to
assign an IP address to it. The protocol used for this process is known as
Dynamic Host Configuration Protocol (DHCP). The server referenced here is known as the
DHCP server. This server is responsible for assigning IP addresses to all the
computers on the network. It is the responsibility of the DHCP server to make
sure that there is no IP address conflict. If one of the machines goes down and
then boots up again, a fresh DHCP request is sent to the server, which may
assign the same or a different IP address this time. Usually a pool of
IP addresses is given to the DHCP server and it uses only those IP addresses
for assignments. This is done so that other IP addresses can safely be used for static
assignments without any conflict.
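For reference, a minimal IOS DHCP server sketch (all addresses are illustrative):
! Keep a few addresses out of the pool for static assignments
ip dhcp excluded-address 192.168.1.1 192.168.1.10
!
ip dhcp pool LAN
 network 192.168.1.0 255.255.255.0
 default-router 192.168.1.1
 dns-server 8.8.8.8
 lease 7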
DNS
Most of us
would have used google.com for internet search but have you ever thought on how
it is made sure that typing google.com in our web browser will actually contact
the correct server? Well, to understand this, we need to understand the concept
of Domain name server (DNS).
In real life,
as people are identified by their names, similarly in computer networks,
individual computers are identified through the IP addresses assigned to them. IP
addresses can be of two types: public and private. Usually servers use
public IPs as they are contacted by millions of computers worldwide, while
your computer, connected behind a router, is usually assigned a private
IP. Since there is a limited number of public IPs available, the
concept of private IPs in a network (behind a router with a public IP) has grown
popular and successful. The broader concept used for this is known as NAT,
or Network Address Translation.
Remembering
IP addresses is a difficult task for humans, so each server also has a name
(like google.com). So, end users just need to remember the name, type it in
their web browser and hit enter. Now, let's come to the story of what
happens when the user hits enter after typing the name in the web browser. The first
thing required is to convert the domain name to the corresponding IP.
To accomplish this, a DNS request is sent to the configured resolver (which in most
home setups is the default gateway, i.e. the router). The router has a configured
upstream DNS server IP to which this request is forwarded.
DNS servers
are used to convert the domain name to IP address. When a request is received
by the DNS server, it checks whether it has the required information. If this
conversion information is not present, the DNS server forwards the request
to another DNS server. In this way, the domain name to IP address conversion
is done and the result is sent back.
Once the IP
is known then a normal HTTP GET request to that particular IP is made and
things move on.
Post DNS,
how things move on?
To
understand the following explanation one should have a basic knowledge of TCP/IP protocol
suite layers. Still we’ll try to keep the explanation basic here.
•
Once the IP
address is known through the DNS process, an HTTP GET request is prepared at
the application layer. This request is then forwarded to the Transport layer.
•
There are
two protocols (TCP and UDP) that are mainly used at this layer. It is at this
layer that the request is encapsulated into transport layer segments. If TCP
is being used, it also tries to ensure (via the negotiated MSS and Path MTU
Discovery) that segment size does not exceed the lowest MTU in the path between
source and destination. This is done to avoid fragmentation of the packet
somewhere in the middle of its journey. On the other hand, if UDP is being used,
this special care is not taken and as a result packets can get fragmented.
•
Once the
packet is formed at transport layer, it is pushed to the IP layer. This layer
adds the information related to source and destination IP addresses and some
other important information like TTL (time to live), fragmentation information
etc. All this information is required while the packet is on its way to the
destination.
•
After this
the packet enters the data link layer, where the information related to MAC
addresses is added, and then the frame is pushed on to the physical layer. So a
stream of 0s and 1s is sent out of your NIC onto the physical medium.
If the
destination of the packet is not directly connected to the source computer then
through the routing information present on the source computer, the packet is
transmitted to the nearest relevant computer node. There can be various nodes
in a network like routers, bridges, gateways etc. Each entity has its own
importance like a router is used for forwarding the packet, a bridge is used
for connecting networks using same protocol while gateways are used for
connecting networks with different protocols.
If we
consider a basic network then routers are the main agents which play a vital
role in forwarding the packet from source to destination. When the packet first
leaves the source computer then the mac address of the relevant router (to which
the packet is being transferred) is used as its destination mac address.
When the
packet reaches to that router, then the router performs the following action :
•
It decreases
the TTL value and recomputes the check-sum of the packet.
•
The router
searches its routing information table for the complete host address as
specified by the packet’s destination IP address. If found then router takes
action to forward the packet to the relevant host.
•
If no such
entry is found then the table is searched for the network address derived from
the destination IP. If found then router forwards the packet to that particular
network.
•
If the above two
checks fail, then the packet is forwarded to the default router as derived
from the default entry in its routing information table.
In any of
the above cases, whenever the packet is transferred by router to some other
router or to the destination, the destination mac address of the packet is
changed to the immediate router or destination to which it is being sent. In
this way the IP address information in the packet remains the same but the
destination mac address changes from one router to another. So in this
way, the packet travels from one router to another until it reaches the
destination.
Now, at the
destination:
•
The packet
is first received at the physical layer, which issues an IRQ to the CPU to
indicate that some data has arrived and is waiting to be processed.
•
After this
the data is sent up to the data link layer, where the destination MAC address is
checked to see if this frame is indeed meant for this computer.
•
If the above
check is passed then this packet is passed to IP layer where some IP address
checks and check-sum verifications are done and then it is passed on to the
relevant transport layer protocol.
•
Once this is
done, then from the knowledge of the ports, the information (or the HTTP GET
request in our case) is passed on to the application listening on that port.
•
This way the
request reaches the Google web server.
After this
the response is formed and transmitted back in the same way as described above.
There you
have it. This is how a data packet travels from source to destination in the
Internet.
What is the administrative distance of EIGRP, eBGP, iBGP?
What is needed on a router interface to allow DHCP to function on
a subnet?
We use DHCP Relays when DHCP client and
server don’t reside on the same (V)LAN, as is the case in this scenario. The
job of the DHCP relay is to accept the client broadcast and forward it to the
server on another subnet. The relayed packet is sent to the destination as a
unicast, but the destination could ultimately be a directed broadcast towards
multiple servers. Let me elaborate on this a little bit. In our scenario, the
DHCP server is 192.168.145.5. It stands to reason that R4 would forward
incoming DHCP Discover message to this IP address. What would happen if R5 was
dead and not responding? Well, we could configure a secondary server on the
same subnet, say on 192.168.145.55 address, but how will we then configure the
Relay? We could either wait for server to fail and configure the secondary
address, or we could configure multiple relays. Again, what if we wanted a
seriously large DHCP Server cluster with tens of servers? We’d need to specify
the list of relays, but this wouldn’t scale. Alternatively, we could configure
the destination address to be 192.168.145.255, which would be sent as a
broadcast once it reaches the router connected to the 192.168.145.0/24 subnet.
In our case, R4 would simply send the broadcast directly. Care should be taken
with this approach since “ip directed-broadcast” is usually disabled by default
and this kind of a message may be dropped. If this functionality is required,
the directed broadcast support must be enabled.
This is all fine and dandy, but how
do we configure DHCP Relay? As it turns out, it’s rather simple – using “ip
helper-address” command.
R4:
interface FastEthernet0/0
ip helper-address 192.168.145.5
!
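If more than one DHCP server must be reachable, several helper addresses can simply be listed; and if the directed-broadcast approach described above is used instead, it has to be explicitly enabled on the server-facing router. A sketch (addresses as in the scenario above):
interface FastEthernet0/0
 ip helper-address 192.168.145.5
 ip helper-address 192.168.145.55
!
! On the router attached to 192.168.145.0/24, only if relaying to 192.168.145.255
interface FastEthernet0/1
 ip directed-broadcast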
HOW DHCP WORKS
Here are the
steps :
•
Step
1: When the
client computer (or device) boots up or is connected to a network, a
DHCPDISCOVER message is sent from the client to the server. As there is no
network configuration information on the client, the message is sent with
0.0.0.0 as source address and 255.255.255.255 as destination address. If the
DHCP server is on local subnet then it directly receives the message or in case
it is on different subnet then a relay agent connected on client’s subnet
is used to pass on the request to DHCP server. The transport protocol used for
this message is UDP and the port number used is 67. The client enters the
initializing stage during this step.
•
Step
2: When the
DHCP server receives the DHCPDISCOVER request message then it replies with a
DHCPOFFER message. As already explained, this message contains all the network
configuration settings required by the client. For example, the yiaddr field of
the message will contain the IP address to be assigned to the client. Similarly,
the subnet mask and gateway information is filled in the options field. Also,
the server fills in the client MAC address in the chaddr field. This message is
sent as a broadcast (255.255.255.255) message for the client to receive it
directly or if DHCP server is in different subnet then this message is sent to
the relay agent that takes care of whether the message is to be passed as
unicast or broadcast. In this case also, UDP protocol is used at the transport
layer with destination port as 68. The client enters selecting stage during
this step
•
Step
3: The client
forms a DHCPREQUEST message in reply to DHCPOFFER message and sends it to the
server indicating it wants to accept the network configuration sent in the
DHCPOFFER message. If there were multiple DHCP servers that received
DHCPDISCOVER then client could receive multiple DHCPOFFER messages. But, the client
replies to only one of the messages by populating the server identification
field with the IP address of a particular DHCP server. All the messages from
other DHCP servers are implicitly declined. The DHCPREQUEST message will still
contain the source address as 0.0.0.0 as the client is still not allowed to use
the IP address passed to it through DHCPOFFER message. The client enters
requesting stage during this step.
Step 4: Once the server receives
DHCPREQUEST from the client, it sends the DHCPACK message indicating that now
the client is allowed to use the IP address assigned to it. The client enters
the bound state during this step.
What is a
broadcast storm?
Within the
scope of switching, you might consider the following :
Redundant
links (in switched LANs) cause switching loops.
Switching
loops lead to these problems:
1- broadcast
storm
2- multiple
frame copies (duplicated unicast frames)
3- thrashing
the MAC table (confused about the location of the devices)
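Spanning Tree is the primary protection against these loops; on Catalyst switches, per-port rate limiting and BPDU Guard are commonly added as well. A small sketch (the threshold is arbitrary):
interface FastEthernet0/1
 ! Drop broadcast traffic above 10% of the interface bandwidth
 storm-control broadcast level 10.00
 ! An access port should never receive BPDUs; err-disable it if it does
 spanning-tree portfast
 spanning-tree bpduguard enable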
TCP CONNECTION SEQUENCE:
WHAT IS MTU?
Maximum
transmission unit (MTU) defines the largest size of packets that an
interface can transmit without the need to fragment. IP packets larger than the
MTU must go through IP fragmentation procedures.
What other TCP
setting can you modify besides MTU to shorten packets?
CHANGE THE
TCP MSS
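A common IOS example is clamping the MSS on an interface with a reduced MTU, e.g. for PPPoE (the values below assume a 1492-byte MTU):
interface FastEthernet0/0
 ip mtu 1492
 ! Rewrite the MSS option in TCP SYNs passing through this interface
 ip tcp adjust-mss 1452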
What is a
Martian? (martian packet) - This one took some googling to come up with ...
The name is derived from "a packet from Mars", a place where
packets clearly cannot originate.
BOGONS
WHAT PORT DOES ICMP USE
ICMP has
its own protocol number (similar to the L4 protocol numbers that TCP and
UDP have).
TCP is
protocol 6
UDP is
protocol 17
ICMP is
protocol 1
(Some people
argue about whether ICMP is or isn't an L4 protocol, due to it having its own protocol
number. At the end of the day, it is OK to disagree about whether it is L4 or not,
because we can agree that it has its own protocol number. ICMP is
really an assistant to IP, at L3.) But I digress.
With TCP and
UDP, they use port numbers to refer to application layer services such as HTTP
(port 80), TELNET (port 23) and so forth for TCP, and UDP services have their
own well known ports too.
With ICMP,
it doesn't use port numbers, but has ICMP "types" along with ICMP
"codes".
For a full
list of these, you can visit here:
The most
popular ICMP types are the ping request and reply, which use ICMP type 8
(echo-request) and ICMP type 0 (echo-reply).
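Since ICMP is matched by type/code rather than by port, an IOS ACL permitting only ping traffic would look like this (a sketch):
access-list 101 permit icmp any any echo
access-list 101 permit icmp any any echo-reply
! All other traffic, including other ICMP types, hits the implicit deny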
Explain Policy
Based Routing
Policy-based
routing provides a tool for forwarding and routing data packets based on
policies defined by network administrators. In effect, it is a way to have the
policy override routing protocol decisions. Policy-based routing includes a
mechanism for selectively applying policies based on access list, packet size
or other criteria. The actions taken can include routing packets on
user-defined routes, setting the precedence, type of service bits, etc.
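A minimal IOS PBR sketch along these lines (addresses and the next hop are placeholders):
! Select the interesting traffic
access-list 100 permit ip 192.168.1.0 0.0.0.255 any
!
route-map PBR permit 10
 match ip address 100
 set ip next-hop 10.1.1.2
!
! Apply the policy to traffic arriving on this interface
interface FastEthernet0/0
 ip policy route-map PBR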
Various FTP protocols through
PIX/ASA, issues/workarounds/solutions :::: Explained. MUST READ !!
I've been
seeing quite a few cases over various types of FTP through
ASA/PIX.
Here is
something I have compiled which might be helpful for you on such
cases-
Various FTP
forms:
1) Normal
FTP
2) SFTP -
SSH File Transfer Protocol
3) FTPS -
FTP over SSL
i> Implicit FTPS
ii> Explicit FTPS
//// It has
been assumed that FTP inspection is disabled on ASA in
scenarios
below. ////
===========
Normal FTP:
===========
File
Transfer Protocol (FTP) is a network protocol used to transfer data
from one
computer to another through a network, such as the Internet.
->
Inbound FTP Scenarios:
Server----I(ASA)O----client
a) Passive
Client [####FAILS####]
Client connects
to server's public IP on port 21, authenticates. After
this client
enters passive mode using PASV command. When server receives
PASV
command, it generates a message in which client is informed about
the port it
needs to connect to for data transfer. However, server uses
its own
private IP address in the communication and because firewall is
not doing
FTP inspection, it will not modify/translate the payload to
the public
IP of server. Hence, client receives private IP address of
the server
and is unable to connect for data connection.
Solution:
Enable FTP inspection (see the ASA inspection sketch at the end of this Normal FTP section).
b) Active
Client [####WORKS####]
Client
connects to server public IP on port 21, authenticates. Then
client sends
a PORT command. Server calculates the port to which it
needs to
connect to the client and initiates the connection to the port
from
source-port TCP/20 (ftp-data). Outbound connection works fine
because, by
default outbound traffic is permitted on ASA.
FTP
Inspection required: NO.
->
Outbound FTP Scenarios:
client----I(ASA)O----Server
a) Active
Client [####FAILS####]
Client
connects to server public IP on port 21, authenticates. Then
client sends
a PORT command. However, PORT command is being sent using
clients
private IP address and because firewall is not doing FTP
inspection,
it will not modify/translate the payload to the public IP of
server ,
server receives a Private IP address of the Client. Due to
this, server
is unable to initiate data connection to the Client and FTP
fails.
Solution:
Enable FTP inspection.
b) Passive
Client [####WORKS####]
Client
connects to server public IP on port 21, authenticates. After
this client
enters passive mode using PASV command. When server receives
PASV
command, it generates a message in which client is informed about
the port it
needs to connect to for data transfer. Client calculates
this port
and initiates an outbound connection on this new port and
establishes
SSL connection for data transfer. As this is an outbound
connection,
everything works fine.
FTP
Inspection required: NO.
Refer to
following link for detailed explanation of Active/Passive FTP:
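Where the scenarios above say "Enable FTP inspection", a minimal ASA sketch using the default global policy is:
policy-map global_policy
 class inspection_default
  inspect ftp
!
service-policy global_policy global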
====================
SFTP - SSH
File Transfer Protocol:
====================
SFTP (SSH
File Transfer Protocol), sometimes called Secure File Transfer
Protocol is
a network protocol that provides file transfer and
manipulation
functionality over any reliable data stream. It is
typically
used with version two of the SSH protocol (TCP port 22) to
provide
secure file transfer.
SFTP is
**not** FTP run over SSH, but rather a new protocol designed
from the
ground up by the IETF SECSH working group. The protocol is not
yet an
Internet standard.
Port used:
22(TCP)
Firewall
Perspective of SFTP-
-----------------------------
Now, this is
firewall-friendly stuff, the reason being that all communication
happens
over port 22 (TCP). Hence, depending on the setup, you don't need
to configure
much on the firewall-
Server----I(ASA)O----client
Server
inside, client outside, normally, need to have static mapping for
the server
and open port 22 to the server's mapped IP for traffic to
flow
through.
client----I(ASA)O----Server
Client
inside, server outside, just need to open outbound access and
client
should be able to access SFTP server.
FTP
Inspection required: NO (not an FTP protocol).
====================
FTPS - FTP
over SSL:
====================
FTPS (S
after FTP) is a super-set of the same FTP protocol, as it allows
for
encryption of the connection over an SSL/TLS encrypted socket. There
are two
modes this can be achieved-
i>
Implicit FTPS
ii>
Explicit FTPS
FTPS as a
whole is not firewall friendly, refer to following scenarios
to
understand why.
------------------
(I) Implicit
FTPS-
------------------
In Implicit
FTPS, basically it is an SSL encrypting socket wrapped around
the entire
communication from the point of connection initiation. To
separate
this from normal FTP, IFTPS was assigned a standard port
990(TCP),
compared to normal FTP which uses 21(TCP). Note that this mode
is far less
common than the explicit mode.
->
Inbound IFTPS Scenarios:
Server----I(ASA)O----client
a) Inbound
Implicit FTPS, Passive Client [####FAILS####]
Client
connects to server's public IP on port 990, authenticates over
TLS (AUTH
command). After authentication for data protection, client
uses command
PROT. After this client enters passive mode using PASV
command.
When server receives PASV command, it generates a message in
which client
is informed about the port it needs to connect to for data
transfer.
However, server uses its own private IP address in the
communication
and because this goes over encrypted session, firewall
cannot
modify/translate the payload to the public IP of server. Hence,
client
receives private IP address of the server and is unable to
connect for
data connection.
Inspection
Required: No, will not help anyways.
Can we make
this work through ASA: No (Opening all the ports to the
server will
not make this work).
Workaround:
Use Active client, see below.
b) Inbound Implicit
FTPS, Active Client [####WORKS####]
Client
connects to server public IP on port 990, authenticates over TLS
(AUTH).
After authentication for data protection uses command PROT, then
client sends
a PORT command over the encrypted session. Server
calculates
the port to which it needs to connect to the client and
initiates
the connection to the port from source-port TCP/989
(ftps-data),
in normal FTP port TCP/20 (ftp-data). Outbound connection
works fine
because, by default outbound traffic is permitted on ASA.
Inspection
Required: No.
->
Outbound IFTPS Scenarios:
client----I(ASA)O----Server
a) Outbound
Implicit FTPS, Active Client [####FAILS####]
Client
connects to server public IP on port 990, authenticates over
TLS(AUTH).
After authentication for data protection uses command PROT,
then client
sends a PORT command over the encrypted session. However,
because this
PORT command is being sent over encrypted session, server
receives a
Private IP address of the Client. Due to this, server is
unable to
initiate data connection to the Client and FTP fails.
Inspection
Required: No, will not help anyways.
Can we make
this work through ASA: No (Opening all the ports to the
server will
not make this work).
Workaround:
Use Passive client, see below.
b) Outbound
Implicit FTPS, Passive Client [####WORKS####]
Client
connects to server public IP on port 990, authenticates over
TLS(AUTH).
After authentication for data protection uses command PROT.
After this
client enters passive mode using PASV command. When server
receives
PASV command, it generates a message in which client is
informed
about the port it needs to connect to for data transfer. Client
calculates
this port and initiates an outbound connection on this new
port and
establishes SSL connection for data transfer. As this is an
outbound
connection, everything works fine.
Inspection
Required: No.
-------------------
(II)
Explicit FTPS-
-------------------
Soon after
FTPS was in use some smart people decided it would be best if
we could
have an FTP server that could support unencrypted as well as
encrypted
connections, and do it all over the same port. To accommodate
this the
"explicit" FTPS protocol connection begins as a normal
unencrypted
FTP session over FTP's standard port 21. The client then
explicitly
informs the server that it wants to encrypt the connection by
sending an
"AUTH TLS" command to the server. At that point the
FTPS-enabled
server and the client begin the SSL or TLS handshake and
further
communications happen encrypted. Note that most (if not all)
explicit
FTPS servers can be optionally configured to require
encryption,
so it will deny clients that attempt to transfer data
unencrypted.
Often this can be configured on a user by user basis.
->
Inbound EFTPS Scenarios:
Server----I(ASA)O----client
a) Inbound
Explicit FTPS, Passive Client [####FAILS####]
Client
connects to server public IP on port 21, authenticates over
TLS(AUTH).
After authentication for data protection uses command PROT.
After this
client enters passive mode using PASV command. When server
receives
PASV command, it generates a message in which client is
informed
about the port it needs to connect to for data transfer.
However,
server uses its own private IP address in the communication and
because this
goes over encrypted session, firewall cannot
modify/translate
the payload to the public IP of server. Hence, client
receives
private IP address of the server and is unable to connect for
data
connection.
Can we make
this work through ASA: Yes. See details below-
If the client in
this scenario is capable of using CCC (Clear Command
Channel),
the FTP client connects to the server, negotiates a secure
connection,
authenticates (sends user and password) and reverts back to
plaintext (control channel only).
Next, enable FTP inspection. Now, when
server
responds with the port client needs to connect to, firewall would
be able to
intercept it and translate IP address in payload and also
open the
connection accordingly.
Note: Not
all FTP clients/servers might support CCC command.
Inspection Required:
Yes, along with CCC command from client.
Workaround:
See above.
b) Inbound
Explicit FTPS, Active Client [####WORKS####]
Client
connects to server public IP on port 21, authenticates over
TLS(AUTH).
After authentication for protection uses command PROT, then
client sends
a PORT command over the encrypted session. Server
calculates
the port to which it needs to connect to the client and
initiates
the connection to the port from source-port 20 (ftp-data).
Outbound
connection works fine because, by default outbound traffic is
permitted on
ASA.
Inspection
Required: No.
->
Outbound EFTPS Scenarios:
client----I(ASA)O----Server
a) Outbound
Explicit FTPS, Active Client [####FAILS####]
Client
connects to server public IP on port 21, authenticates over TLS.
After
authentication for protection uses command PROT P, then client
sends a PORT
command over the encrypted session. However, because this
PORT command
is being sent over encrypted session, server receives a
Private IP
address of the Client. Due to this, server is unable to
initiate
data connection to the Client and FTP fails.
Can we make
this work through ASA: Yes, see explanation of workaround
for
"Inbound Explicit FTPS, Passive Client"
Inspection
Required: See "Inbound Explicit FTPS, Passive Client"
Workaround:
See "Inbound Explicit FTPS, Passive Client"
b) Outbound
Explicit FTPS, Passive Client [####WORKS####]
Client
connects to server public IP on port 21, authenticates over TLS.
After
authentication for protection uses command PROT P. After this
client
enters passive mode using PASV command. When server receives PASV
command, it
generates a message in which client is informed about the
port it
needs to connect to for data transfer. Client calculates this
port and
initiates an outbound connection on this new port and
establishes
SSL connection for data transfer. As this is an outbound
connection,
everything works fine.
Inspection
Required: No.
For more
details about FTP AUTH, PROT, PBSZ, and CCC commands, refer to
following
link:
Feel free to
get in touch with me if you have any questions/concerns.
Also, let me
know if there are any discrepancies in this.
7.How would you
filter the routes being redistributed?
Distribute
Lists
Distribute
lists are access lists applied to the routing process, determining which
networks are allowed into the routing table or included in updates. They
essentially act as a filter.
An access
list applied to routing = distribute lists
When
creating a distribute list, use the following steps:
Step 1. Identify the network addresses to
be filtered and create an ACL – permitting the networks you want to be
advertised.
Step 2. Determine if you want to filter
updates coming into the router or leaving the router.
Step 3. Assign the ACL using the
distribute-list command.
Incoming Distribute Lists:
R1(config-router)#
distribute-list {acl-number | name} in [interface-type number]
Outgoing Distribute Lists:
R1(config-router)#
distribute-list {acl-number | name} out [interface-name | routing-process
| AS-number]
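A concrete sketch tying the three steps together (the network, AS number and interface are illustrative):
! Step 1: permit only the networks you want advertised
access-list 10 permit 192.168.10.0 0.0.0.255
!
! Steps 2 and 3: filter updates leaving FastEthernet0/0
router eigrp 100
 distribute-list 10 out FastEthernet0/0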
Route Maps
When a
routing update arrives at an interface, a series of steps occur to process it
correctly. The diagram below outlines those steps and serves as a
foundation for the rest of this route redistribution and filtering section.
Route
maps are extremely flexible and are used in a number of routing scenarios,
including:
•
Controlling
redistribution based on
permit/deny statements
•
Defining
policies in policy-based routing (PBR)
•
Adding
more granular decision making to NAT than simply using static translations
•
When
implementing BGP routing policy
Route maps
allow an administrator to define specific traffic and then take automated
actions against it to control how routing information is processed and
forwarded. Route maps use logic similar to if/then statements in simple
scripting.
In route map
terms, it matches traffic against conditions and sets options for
that traffic.
NOTE:
If you have downloaded the Switch Exam Guide, you will notice the similarity
between the syntax structure of route maps and VACLs.
Each
statement in a route map has a sequence number, which is read from lowest to
highest. The router stops reading statements when it reaches its first
matching statement.
Understand
that there is an implicit deny included in all route maps. If traffic
does not match any statement, it is denied.
Basic Route Map Configuration
R1(config)#
route-map {tag} permit | deny [sequence_number]
That is how
all route maps begin. Permit means that any traffic matching the match
statement that follows is processed by the route map. Deny means that any
traffic matching the match statement that follows is NOT processed by
the route map. Know the difference.
Match & Set Conditions
If no match
condition exists, the statement matches anything (similar to a ‘permit any’).
If no set
condition exists, the statement is simply permitted or denied with no
additional changes made.
If multiple
match conditions are used on the same line, it is interpreted as a logical OR.
In other words, if one condition is true, a match is made. For
example, the router would interpret ‘match a b c’ as ‘a or b or c’.
If multiple
match conditions are used on consecutive lines, it is interpreted as a logical
AND. In other words, all conditions must be true
before a match is made. For example, the router would interpret the
following commands as match a and b and c:
route-map
EXAMPLE permit 5
match a
match b
match c
Important route redistribution match conditions
ip address – Refers to an access list that permits or denies networks
ip address prefix-list – Refers to a prefix list that permits or denies prefixes
ip next-hop – Refers to an access list that permits or denies next-hop IP addresses
ip route-source – Refers to an access list that permits or denies advertising-router IP addresses
length – Permits or denies packets based on length (in bytes)
metric – Permits or denies routes with specific metrics from being redistributed
route-type – Permits or denies redistribution based on the route type listed
tag – Routes can be labeled with a number that identifies them
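Putting it together, a sketch of filtering routes during redistribution with a route map (the ACL, metric values and process numbers are illustrative):
access-list 20 permit 10.10.0.0 0.0.255.255
!
route-map OSPF-TO-EIGRP permit 10
 match ip address 20
 set metric 10000 100 255 1 1500
! Implicit deny: anything not matched by sequence 10 is not redistributed
!
router eigrp 100
 redistribute ospf 1 route-map OSPF-TO-EIGRP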
How to set a switch to be the
root in Spanning Tree. There are actually two answers since you can set it as
root or you can just lower the priority.
SW1(config)# spanning-tree vlan 10 root primary
Or lower the
priority on the switch directly:
SW1(config)# spanning-tree vlan 10 priority 4096
(The priority must be a multiple of 4096; the switch with the lowest bridge
priority wins the root election.)
Why does OSPF require all traffic
between non-backbone areas to pass through a backbone area (area 0)?
The answer
can be derived easily by comparing three fundamental concepts of link state
protocols, concepts that even most OSPF beginners understand.
The first
concept is this:
Every link
state router floods information about itself, its links, and its neighbors to
every other router. From this flooded information each router builds an
identical link state database. Each router then independently runs a
shortest-path-first calculation on its database – a local calculation using
distributed information – to derive a shortest-path tree. This tree is a sort
of map of the shortest path to every other router.
One of the
advantages of link state protocols is that the link state database provides a
“view” of the entire network, preventing most routing loops. This is in
contrast to distance vector protocols, in which route information is passed
hop-by-hop through the network and a calculation is performed at each hop – a
distributed calculation using local information. Each router along a route is
dependent on the router before it to perform its calculations correctly and
then correctly pass along the results. When a router advertises the
prefixes it learns to its neighbors it’s basically saying, “I know how to reach
these destinations.” And because each distance vector router knows only what
its neighbors tell it, and has no “view” of the network beyond the neighbors,
the protocol is vulnerable to loops.
The second
concept is this:
When link
state domains grow large, the flooding and the resulting size of the link state
database becomes a scaling problem. The problem is remedied by breaking the
routing domain into areas: That first concept is modified so that flooding
occurs only within the boundaries of an area, and the resulting link state
database contains only information from the routers in the area. This, in
turn, means that each router’s calculated shortest-path tree only describes the
path to other routers within the area.
The third
concept is this:
OSPF areas
are connected by one or more Area Border Routers (the other main link state
protocol, IS-IS, connects areas somewhat differently) which maintain a separate
link state database and calculate a separate shortest-path tree for each of their
connected areas. So an ABR by definition is a member of two or more areas. It
advertises the prefixes it learns in one area to its other areas by flooding
Type 3 LSAs into the areas that basically say, “I know how to reach these
destinations.”
Wait a minute
– what that last concept described is not link state, it’s distance vector. The
routers in an area cannot “see” past the ABR, and rely on the ABR to correctly
tell them what prefixes it can reach. The SPF calculation within an area
derives a shortest-path tree that depicts all prefixes beyond the ABR as leaf
subnets connected to the ABR at some specified cost.
And that
leads us to the answer to the question:
Because
inter-area OSPF is distance vector, it is vulnerable to routing loops. It
avoids loops by mandating a loop-free inter-area topology, in which traffic
from one area can only reach another area through area 0.
This is my
little gift to you. The next time you are being interviewed by an old coot that
likes to use this question to weed out the cookbook operators from those who
actually understand a little about OSPF, you can bring a smile to his grizzled
face.
What is IGMP
protocol?
Internet
Group Management Protocol allows internet hosts to multicast, i.e. to send
messages to a group of computers. There may be a group of internet hosts
interested in a multicast. IGMP allows a router to determine which host groups have
members on a given network segment; it helps to establish group memberships. It
is commonly used for streaming video and gaming. The protocol is
implemented on both the host side and the router side. The host side is responsible
for notifying the router of its membership in a group. The notification is made to a local
router, and this local router (router side) in turn sends out periodic queries.
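On an IOS router, IGMP processing is enabled implicitly when PIM is enabled on an interface; a quick sketch (the interface name is illustrative):
ip multicast-routing
!
interface FastEthernet0/0
 ip pim sparse-mode
!
! Verify which groups have members on the attached segment
show ip igmp groups
show ip igmp interface FastEthernet0/0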
WSA?
Why web
security ? – layer 7 security.
Known
risks-> loss of productivity, bandwidth consumption, threats from malicious
s/w, data leakage,
Dynamic
nature of web.
Web is not a
safe place.
3 blades in
Wsa:
Acceptable
use policy -> url filtering
Malware
defense-> web reputation, malware scanning, http inspection, l4tm
Data
security -> on box or off box with dlp.
ASync OS
Based on FreeBSD
No shell
access
Proxy
services
Web proxy
Anti virus
url
filtering
policy
management
(remember
CSC module?)
L4TM
Scans
outbound traffic at layer 4
Wire speed
Can disrupt
session: reset for tcp sessions, icmp unreachables for udp sessions, packets
sent using proxy port
Two ways to
redirect traffic to the WSA:
1. WCCP – Web
Cache Communication Protocol (transparent proxy) (which device do you configure WCCP on? – see the sketch after this list)
2. Configure
proxy settings in the user's browser settings. (explicit forward mode) (PAC file?)
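A minimal IOS sketch of WCCP redirection toward a WSA (the interface and the use of the standard web-cache service are assumptions):
! Enable the standard web-cache (HTTP) service; the WSA registers as the cache engine
ip wccp web-cache
!
interface FastEthernet0/0
 ! Redirect client HTTP traffic entering this interface to the WSA
 ip wccp web-cache redirect in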
initial
login – option for setup one time.
Reporting
tab- scheduled reporting
Web security
manager
Security
manager
Network
System
administration
Proxy
deployments:
Explicit
forward
transparent
TCP_MISS
-> no cache available; TCP_HIT -> served from the disk cache.
Grep ->
used to check logs, available in cli
Pac
file-> JavaScript file, can put it on a webserver – proxy autoconfig
Update Pac
file on server- automatically gets updated on client.
Create a Pac file in text….upload it on WSA. Use
browser connection settings “automatic proxy configuration url” to specify WSA
ALLOW
outbound to only come from WSA.
SECURE:
Device
attacks:
Session
spoofing
Capturing
auth
Exploiting
defects; config errors.
Installing
rootkits
Impersonation
(spoofing)
Network
device planes:
Data- user
traffic
Management-
ssh, telnet
Control-
routing protocols, arp, l2 keepalives, cdp
Services-
customer traffic that is being serviced.
Network
foundation protection – done normally at the access layer in a 3-layer design.
802.1x
vlan
segmentation
anti
spoofing at l2 and l3
device hardening
protecting
stp
protecting
vtp
auth routing
prot
access list
IPS
QOS
Bpduguard
Root guard
Port
security
Vlan maps
Dhcp
snooping
Arp
inspection
IP SPOOFING:
IP SOURCE GUARD, PORT BASED ACCESS CONTROL
STP
SPOOFING: INFLUENCE THE OPERATION OF STP BY BLACKHOLING - BPDUGUARD AND ROOTGUARD
MAC SPOOFING
: STEAL HOST IDENTITIES, POISON CAM TABLE. – USE PORT SECURITY.
SPOOFING
DHCP SERVER: ROGUE DHCP- MAN IN THE MIDDLE- DHCP SNOOPING
ARP
SPOOFING- ARP INSPECTION.
VLAN HOPPING
– DISABLE DTP, DO NOT USE NATIVE VLAN ACROSS TRUNKS
CAM FLOODS-
TURN IT INTO A HUB- LIMIT NUMBER OF MACS, 802.1X, PORT SECURITY (see the sketch after this list)
DHCP
STARVATION – CLIENT STARVING, ALL DHCP LEASES USED UP
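A small Catalyst sketch covering several of the port-level mitigations above (the VLAN, limits and interface are illustrative):
! Port security against MAC spoofing / CAM flooding
interface FastEthernet0/1
 switchport mode access
 switchport access vlan 10
 switchport port-security
 switchport port-security maximum 2
 switchport port-security violation restrict
 spanning-tree portfast
 spanning-tree bpduguard enable
!
! DHCP snooping and Dynamic ARP Inspection against rogue DHCP and ARP spoofing
ip dhcp snooping
ip dhcp snooping vlan 10
ip arp inspection vlan 10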
Private
VLANs
Interview
Questions for Check Point Firewall Technology
Question 1 – Which of the applications in
Check Point technology can be used to configure security objects?
Answer:
SmartDashboard
Question 2 – Which of the applications in
Check Point technology can be used to view who did what (administrator actions) to
the security policy?
Answer:
SmartView Tracker
Question 3 – What are the two types of
Check Point NG licenses?
Answer:
Central and Local licenses
Central licenses are the new licensing model for NG and are bound to the
SmartCenter server. Local licenses are the legacy licensing model and are bound
to the enforcement module.
Question 4 – What is the main difference
between cpstop/cpstart and fwstop/fwstart?
Answer:
Using cpstop and then cpstart will restart all Check Point components,
including the SVN foundation. Using fwstop and then fwstart will only restart
VPN-1/FireWall-1.
Question 5 – What are the functions of CPD,
FWM, and FWD processes?
Answer:
CPD – CPD is high in the hierarchical chain and helps to execute many
services, such as Secure
Internal Communication (SIC), licensing and status reporting.
FWM – The FWM process is responsible for the execution of the database
activities of the
SmartCenter server. It is, therefore, responsible for Policy installation,
Management High
Availability (HA) Synchronization, saving the Policy, Database Read/Write
action, Log
Display, etc.
FWD – The FWD process is responsible for logging. It is executed in relation to
logging, Security
Servers and communication with OPSEC applications.
Question 6 – How to Install Checkpoint
Firewall NGX on SecurePlatform?
Answer:
1. Insert the Checkpoint CD into the computer's CD drive.
2. You will
see a Welcome to Checkpoint SecurePlatform screen. It will prompt you to press
any key. Press any key to start the installation, otherwise it will abort the
installation.
3. You will
now receive a message saying that your hardware was scanned and found suitable
for installing SecurePlatform, and asking whether you wish to proceed with the
installation of Checkpoint SecurePlatform.
Of the four
options given, select OK, to continue.
4.You will
be given a choice of these two:
SecurePlatform
SecurePlatform Pro
Select
Secureplatform Pro and enter ok to continue.
5.Next it
will give you the option to select the keyboard type. Select your Keyboard type
(default is US) and enter OK to continue.
6.The next
option is the Networking Device. It will give you the interfaces of your
machine and you can select the interface of your choice.
7.The next
option is the Network Interface Configuration. Enter the IP address, subnet
mask and the default gateway.
For this
tutorial, we will set this IP address as 1.1.1.1 255.255.255.0 and the default
gateway as 1.1.1.2 which will be the IP address of your upstream router or
Layer 3 device.
8.The next
option is the HTTPS Server Configuration. Leave the default and enter OK.
9.Now you
will see the Confirmation screen. It will say that the next stage of the
installation process will format your hard drives. Press OK to Continue.
10. Sit back
and relax as the hard disk is formatted and the files are being copied.
Once it is
done with the formatting and copying of image files, it will prompt you to reboot
the machine and, importantly, REMOVE THE INSTALLATION CD. Press Enter to reboot.
Note:
SecurePlatform disables your Num Lock by overriding the system BIOS settings, so
press Num Lock to re-enable it.
For the
FIRST Time Login, the login name is admin and the password is also admin.
11.Start the
firewall in Normal Mode.
12.Configuring
Initial Login:
Enter the
user name and password as admin, admin.
It will
prompt you for a new password. Choose a password.
Enter new
password: check$123
Enter new password again: check$123
You may
choose a different user name:
Enter a user
name:fwadmin
Now it will
prompt you with the [cpmodule]# prompt.
13. The next
step is to launch the configuration wizard. To start the configuration wizard,
type “sysconfig”.
You have to
enter n for next and q for Quit. Enter n for next.
14.Configuring
Host name: Press 1 to enter a host name. Press 1 again to set the host name.
Enter host
name: checkpointfw
You can either enter an IP address or leave it blank to associate an IP address
with this hostname. Leave it blank for now.
Press 2 to
show host name. It now displays the name of the firewall as checkpointfw.
Press e to
get out of that section.
15.Configuring
the Domain name.
Press 2 to
enter the config mode for configuring the domain mode. Press 1 to set the
domain name.
Enter domain
name:yourdomain.com
Example:
Enter domain
name: checkpointfw.com
You can
press 2 to show the domain name.
16.
Configuring Domain Name Servers.
You can
press 1 to add a new domain name server.
Enter IP
Address of the domain name server to add: Enter your domain name server IP
Address HERE.
Press e to
exit.
Network
Connections.
17. Press 4
to enter the Network Connections parameter.
Enter 2 to
Configure a new connection.
Your Choice:
1) eth0
2) eth1
3) eth2
4) eth3
Press 2 to
configure eth1. (We will configure this interface as the inside interface with
an IP address of 192.168.1.1 and a subnet mask of 255.255.255.0. The default
gateway will be configured later as 1.1.1.2.)
Press 1)
Change IP settings.
Enter IP
address for eth1 (press c to cancel): 192.168.1.1
Enter network Mask for interface eth1 (press c to cancel): 255.255.255.0
Enter broadcast address of the interface eth1 (leave empty for default): Enter
Press Enter
to continue….
Similarly
configure the eth2 interface, which will be acting as a DMZ in this case with
10.10.10.1 255.255.255.0.
Press e to
exit the configuration menu.
18.Configuring
the Default Gateway Configuration.
Enter 5
which is the Routing section to enter information on the default gateway
configuration.
1.Set
default gateway.
2.Show default gateway.
Press 1 to
enter the default gateway configuration.
Enter
default gateway IP address: 1.1.1.2
19. Choose a
time and date configuration item.
Press n to
configure the timezone, date and local time.
This part is
self explanatory so you can do it yourself.
The next
prompt is the Import Checkpoint Products Configuration. You can n for next to
skip this part as it is not needed for fresh installs.
20. Next is
the license agreement. You have the option of V for an evaluation product, U for a
purchased product and N for next. Press n for next.
Press Y and
accept the license agreement.
21.The next
section would show you the product Selection and Installation option menu.
Select
Checkpoint Enterprise/Pro.
Press N to
continue.
22. Select
New Installation from the menu.
Press N to
continue.
23. Next
menu would show you the products to be installed.
Since this
is a standalone installation configuration example, select
VPN Pro and
Smartcenter
Press N for
next
24.Next menu
gives you the option to select the Smartcenter type you would like to install.
Select
Primary Smartcenter.
Press n for
next.
A validation
screen will be seen showing the following products:
VPN-1 Pro
and Primary Smartcenter.
Press n for
next to continue.
Now the
installation of VPN-1 Pro NGX R60 will start.
25. The set
of menu is as follows:
Do you want
to add license (y/n)
You can
enter Y which is the default and enter your license information.
26. The next
prompt will ask you to add an administrator. You can add an administrator.
27.The next
prompt will ask you to add a GUI Client. Enter the IP Address of the machine
from where you want to manage this firewall.
28. The
final process of installation is creation of the ICA. It will prompt you for
the creation of the ICA; follow the steps and the ICA will be created. Once the
random pool is gathered (you don't have to do anything), the ICA is initialized.
After the
ICA is initialized, the fingerprint is displayed. You should save this fingerprint
because it will be used later when connecting to the SmartCenter through the
GUI. The two fingerprints should match. This is a security feature.
The next
step is reboot. Reboot the firewall.
Question 7 – What are the types of NAT and
how to configure it in Check Point Firewall?
Answer:
Static Mode – manually defined