Network Security: Attacks and Defense Strategies
UDP Protocol and the Ping-Pong Effect
In the UDP protocol, port 13 is used for the “daytime” service. A daytime server that receives a packet on its UDP port 13 will always return a packet to the requester’s IP address and UDP port, indicating the server’s current time. If the server does not check the requester’s port number, an attacker can send a single packet to a daytime server and cause a ping-pong effect between two daytime servers.
The ping-pong attack in UDP-based services, such as the Daytime service on port 13, exploits the lack of source port validation in the server’s response. In this attack, the attacker sends a UDP packet to Server A (Daytime service on port 13) with a spoofed source IP address of Server B and a random source port. Server A, not checking the source port, sends a response to the spoofed IP (Server B) and port. Server B, receiving this response, assumes it came from Server A and replies with its own time to Server A’s IP address and port 13. This creates an endless loop of traffic between the two servers, each continuously responding to the other. The result is excessive consumption of network resources, CPU, and bandwidth on both servers, which can lead to service degradation or a denial of service (DoS).
The attack is possible because UDP is connectionless and does not verify the source of packets, allowing the attacker to easily spoof the sender’s information and cause unintended interactions between servers.
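The forged trigger packet can be sketched with Python’s struct module. Everything below is illustrative: the IP addresses are placeholders, and actually transmitting such a packet would require a raw socket; this only shows the offline construction of a datagram whose source IP and port are forged to point at another daytime server.

```python
import struct

def ip_checksum(data: bytes) -> int:
    """Standard 16-bit ones'-complement checksum used by IPv4."""
    if len(data) % 2:
        data += b"\x00"
    total = sum(struct.unpack("!%dH" % (len(data) // 2), data))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def build_spoofed_daytime_probe(spoofed_src: str, victim_dst: str) -> bytes:
    """Craft one UDP datagram whose source IP/port are forged so that
    server A's daytime reply lands on server B's own daytime port,
    starting the ping-pong loop. IPs are illustrative placeholders."""
    src = bytes(map(int, spoofed_src.split(".")))
    dst = bytes(map(int, victim_dst.split(".")))
    sport = dport = 13                        # daytime <-> daytime
    payload = b"\n"
    udp = struct.pack("!HHHH", sport, dport, 8 + len(payload), 0) + payload
    ver_ihl, tos, total_len = 0x45, 0, 20 + len(udp)
    ident, flags_frag, ttl, proto = 0, 0, 64, 17   # 17 = UDP
    hdr = struct.pack("!BBHHHBBH4s4s", ver_ihl, tos, total_len,
                      ident, flags_frag, ttl, proto, 0, src, dst)
    hdr = hdr[:10] + struct.pack("!H", ip_checksum(hdr)) + hdr[12:]
    return hdr + udp
```

Because both the source and destination ports are 13, each server’s reply is itself a valid daytime request to the other server, which is exactly what sustains the loop.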
TCP: Link-Layer vs. End-to-End Reliability
Both link-layer and end-to-end reliability are needed in computer networks to provide robust and trustworthy data delivery. Each layer serves a specific role, and their combined operation is essential for dealing with various network issues.
Reliability at the Link Layer (Data Link Layer)
The link layer provides reliability within a local network segment. It ensures that data frames are delivered accurately over the physical medium, often using methods such as error checking, retransmissions, and flow control. Link-layer reliability thus guarantees correct transmission over a single, immediate hop.
End-to-End Reliability (Transport Layer)
End-to-end reliability is responsible for ensuring data transmission over the whole network, which can span several intermediary nodes and various link-layer technologies. It is normally maintained by the transport layer. This layer employs error detection and correction, as well as retransmission techniques such as acknowledgments and timeouts. It ensures the integrity, order, and completeness of data from the source to the destination.
Both are needed because link-layer reliability functions within a single network segment, while end-to-end reliability operates over the entire network path. Combined, they protect against transmission errors, delays, losses, and reordering. Link-layer reliability is the first line of defense, handling local issues, whereas end-to-end reliability safeguards the entire transmission chain, catering to long-distance and cross-network concerns.
Using both layers makes network communication more resilient, efficient, and error-resistant, resulting in a more dependable and efficient data transmission system.
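The end-to-end side can be illustrated with a toy stop-and-wait ARQ simulation: each frame is resent until acknowledged, regardless of where along the path it was lost. The loss rate, retry limit, and seed below are arbitrary choices for the sketch.

```python
import random

def transmit_with_arq(frames, loss_rate=0.3, max_retries=10, seed=42):
    """Toy stop-and-wait ARQ: each frame is resent until an ACK arrives,
    illustrating end-to-end retransmission on top of an unreliable path."""
    rng = random.Random(seed)
    delivered, retransmissions = [], 0
    for frame in frames:
        for attempt in range(max_retries):
            lost = rng.random() < loss_rate   # frame or its ACK lost in transit
            if not lost:
                delivered.append(frame)       # ACK received, move on
                break
            retransmissions += 1              # timeout fired, resend
        else:
            raise RuntimeError("gave up on frame %r" % (frame,))
    return delivered, retransmissions
```

Even with a 30% loss rate the receiver ends up with every frame in order, at the cost of some retransmissions, which is the essence of transport-layer reliability.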
Off-Path Attacks on TCP
In an off-path attack on TCP, the attacker does not sit on the path between the victim and the server but instead uses indirect methods to infer details of an active connection. One effective technique is a side-channel attack that exploits the global Challenge ACK rate limit (historically 100 per second in Linux). The attacker sends spoofed packets to the server, each guessing a different client port number for the victim’s connection. A guess that matches the active connection triggers a Challenge ACK, which is sent to the victim and never seen by the attacker; a wrong guess triggers none. The attacker then sends in-window packets on its own legitimate connection to the server and counts the Challenge ACKs it receives.
For example, if the attacker sends one spoofed guess followed by 100 probes on its own connection and receives only 99 Challenge ACKs back, it knows the spoofed guess consumed one unit of the shared rate limit, and therefore that the guessed port belongs to the victim’s active connection.
Additionally, the attacker can use timing attacks to determine whether a TCP connection exists between the victim and the server. The attacker sends a probe to the server or victim and measures the response time. An immediate response may indicate that no active TCP connection is in progress, whereas a delay suggests that the victim is engaged in an active connection. The delay arises because the server or victim must process the incoming packets, verify their correctness, or potentially handle retransmissions, depending on network conditions.
Together, the side-channel attack (gathering information through SYN responses) and the timing attack (measuring response delays) form a smart and specialized attack. By combining these methods, the attacker can not only determine whether an active TCP connection exists but also narrow down potential connection details, such as the port number involved, without ever directly interacting with the victim or server. This makes it possible for an attacker to compromise some of the connection parameters and potentially exploit weaknesses in the TCP handshake or session establishment process.
The side-channel attack allows the attacker to probe the network without being detected by the victim or server, while the timing attack enables the attacker to infer additional connection details. When used together, these two methods can bypass typical defenses, such as firewalls or packet filters, making it easier to perform sophisticated attacks on TCP connections.
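The counting trick behind the side channel can be sketched as a small simulation. The IP addresses, port numbers, and the limit of 100 are illustrative, modeling the historical shared Challenge ACK budget.

```python
class ChallengeAckServer:
    """Models a (pre-patch) global challenge-ACK budget: at most `limit`
    challenge ACKs per one-second interval, shared across ALL connections.
    Two connections are active: the victim's and the attacker's own."""
    def __init__(self, limit=100):
        self.limit = limit
        self.active = {("10.0.0.2", 36000),   # victim's connection (to be guessed)
                       ("10.0.0.9", 40000)}   # attacker's own connection
        self.sent = 0
    def new_second(self):
        self.sent = 0                         # budget refills each second
    def receive(self, src_ip, src_port):
        """A packet matching an active 4-tuple draws on the shared budget."""
        if (src_ip, src_port) in self.active and self.sent < self.limit:
            self.sent += 1
            return "challenge-ack"
        return None

def probe_port(server, guessed_port):
    """One spoofed packet guessing the victim's port, then 100 probes on
    the attacker's own connection; the reply count leaks whether the
    spoofed guess consumed one unit of the shared budget."""
    server.new_second()
    server.receive("10.0.0.2", guessed_port)  # spoofed; any reply goes to victim
    return sum(1 for _ in range(100)
               if server.receive("10.0.0.9", 40000) == "challenge-ack")
```

A count of 99 instead of 100 tells the off-path attacker the guess was right, even though the Challenge ACK itself went to the victim.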
Blind Reset Attacks on TCP
A blind reset attack on TCP is a type of attack where a malicious node sends forged TCP RST packets to disrupt an active connection between two parties, without being directly involved in the communication. The attacker needs to know the IP addresses, port numbers, and sequence numbers involved in the TCP connection to conduct the attack.
- Port Number Guessing: The attacker first sends spoofed packets to the server, each guessing a different client port number. The server reacts differently for ports that belong to the active connection than for ports that do not, and this difference can be observed indirectly (for example, through the shared Challenge ACK rate limit). By analyzing these responses, the attacker can deduce the correct port number of the victim.
- Sequence Number Guessing: Once the correct port number is identified, the attacker sends forged RST packets with various sequence numbers. If the sequence number matches the current active connection’s sequence number, the server or client will terminate the connection, believing the RST packet is legitimate.
- Attack Execution: The attacker sends multiple RST packets with different sequence numbers, trying to guess the correct one. When the attacker guesses correctly, the server or client terminates the connection, causing a disruption.
This attack is called “blind” because the attacker does not observe the actual communication but relies on guessing the correct parameters to send a valid RST packet.
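The sequence-number guessing step admits a back-of-the-envelope sketch: under the classic in-window acceptance rule, one guess per window-sized stride suffices, so the attacker needs at most about 2^32 / window guesses. The window size and sequence numbers below are illustrative.

```python
WINDOW = 65535
SEQ_SPACE = 2 ** 32

def rst_accepted(rcv_nxt, rst_seq, window=WINDOW):
    """Classic (pre-RFC 5961) rule: a RST is honored if its sequence
    number falls anywhere inside the receive window."""
    return (rst_seq - rcv_nxt) % SEQ_SPACE < window

def blind_reset_guesses(rcv_nxt, window=WINDOW):
    """Step through the sequence space in window-sized strides and count
    guesses until one lands in-window (at most ~65537 of them)."""
    for i, guess in enumerate(range(0, SEQ_SPACE, window)):
        if rst_accepted(rcv_nxt, guess, window):
            return i + 1
    return None
```

With a 64 KB window the whole 2^32 sequence space collapses to roughly 65,537 candidate RSTs, which is why blind resets were practical before stricter sequence validation.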
IP Fragmentation and Resource Exhaustion Attacks
IP fragmentation, if implemented poorly, can be used to conduct resource exhaustion attacks (DoS). The software will look at the offset and packet length of the fragment and allocate enough memory to hold the whole packet.
The fragment reassembly procedure is a stateful procedure in an otherwise stateless protocol, and this state can be targeted by resource exhaustion attacks. To prevent reassembly from ever completing, an attacker can create a series of fragmented packets with one fragment missing from each packet. The destination node is thereby resource-exhausted, potentially depriving other flows of reassembly services. When necessary, flushing the fragment reassembly buffers can mitigate this kind of attack, although doing so may result in the loss of some legitimate fragments.
To execute these attacks, the attacker does not need to be able to establish two-way communication, and both IPv4 and IPv6 fragment reassembly are affected. In the worst case, an attacker can deliver a stream of specially constructed fragments at a low packet rate while still consuming a large amount of CPU (Central Processing Unit) resources. These attacks may cause a system to become temporarily unresponsive or inaccessible via its network interfaces.
For IP (Internet Protocol) fragmentation, note that all fragments of a packet are reassembled at the destination. The attacker can exploit this by sending, for example, a 120-byte first fragment (offset 0) and a 120-byte last fragment whose offset claims the reassembled packet is nearly 64 KB long. The server, seeing the last fragment, reserves a 64 KB reassembly buffer and holds it for a certain period while it waits for the missing middle fragments, which never arrive, wasting the server’s memory. The attacker thus spends only 240 bytes of traffic to tie up 64 KB on the victim, and by repeating this with many packet IDs, say 50 at a time, ties up 50 such blocks simultaneously. Hence the attacker succeeds in a resource exhaustion DoS attack.
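The memory arithmetic can be checked with a deliberately naive reassembler. The 120-byte fragments, the ~64 KB claimed offset, and the 50 concurrent packet IDs follow the example above; the reassembly timeout is not modeled.

```python
class NaiveReassembler:
    """Deliberately naive reassembly: the buffer grows to the largest
    extent any fragment claims, and is held until a timeout (not modeled)."""
    def __init__(self):
        self.buffers = {}                      # packet id -> bytearray
    def add_fragment(self, pkt_id, offset, data):
        end = offset + len(data)
        buf = self.buffers.setdefault(pkt_id, bytearray())
        if len(buf) < end:                     # reserve up to the claimed extent
            buf.extend(bytes(end - len(buf)))
        buf[offset:end] = data
    def memory_reserved(self):
        return sum(len(b) for b in self.buffers.values())

r = NaiveReassembler()
for pkt_id in range(50):                       # 50 half-finished packets
    r.add_fragment(pkt_id, 0, b"A" * 120)      # first fragment
    r.add_fragment(pkt_id, 65400, b"B" * 120)  # last fragment; middle never sent
bytes_sent = 50 * 240                          # what the attacker transmits
bytes_reserved = r.memory_reserved()           # what the victim holds: 50 * 65520
```

Twelve kilobytes of attack traffic pins down over three megabytes of victim memory, and the ratio only improves for the attacker as the claimed packet size grows.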
Content Delivery Networks (CDN) and Data Encryption
To improve end-user experience and reduce core network overhead, the concept of a Content Delivery Network (CDN) was proposed. The basic idea is to set up many content buffers that are close to end-users. Akamai is one of the major players in the CDN market. To prevent network eavesdroppers from getting access to data during transmission, data encryption is often used in CDN.
Akamai’s Approach to Encrypted Content Delivery
Akamai, as a major content delivery network (CDN) provider, ensures secure content delivery through encryption at two critical segments: from the content source to Akamai and from Akamai to the end-user. Below is a detailed explanation of how this is achieved:
1. From Content Source to Akamai
When content (such as Personally Identifiable Information, PII) is transmitted from the customer’s (e.g., Hulu’s) origin server to Akamai, it is encrypted using SSL/TLS (Secure Sockets Layer / Transport Layer Security). This ensures that sensitive data remains secure during its transit to Akamai. Upon receiving the encrypted data, the Akamai edge server decrypts it, temporarily stores the content in unencrypted form in its cache, and prepares it for the next step.
2. From Akamai to End-User
Once the content is cached on Akamai’s server, it is re-encrypted using a fresh SSL session key that corresponds to the end-user’s request. The encrypted data is then transmitted securely over the internet to the end-user’s device. When the data reaches the end-user, it is decrypted by their client software. This two-step encryption process ensures that content remains encrypted during transmission and prevents eavesdropping by unauthorized parties.
3. Content Verification
Even though content passes through Akamai’s servers, the end-user can verify that the content is originating from the original source (e.g., Hulu). Akamai’s edge servers function as proxy servers and forward user requests to the origin site (Hulu) using standard HTTPS protocols. Importantly, Akamai servers are configured with SSL certificates in Hulu’s name, ensuring that the data is recognized as coming from Hulu. End-users can verify the authenticity of the connection by inspecting the SSL certificate of the server they are connected to. The certificate should match the domain name of the content origin (e.g., Hulu), indicating that the content is being securely delivered by the correct source. This ensures trust and prevents content manipulation.
4. Encryption Key Management
Akamai’s approach to key management ensures that data is encrypted securely. Although a shared encryption key could in principle be reused across multiple end-user sessions, each end-user session is typically associated with a unique, ephemeral session key, which is used for encrypting and decrypting the data between Akamai and the end-user. Akamai does not directly manage the encryption keys of end-user accounts but advises clients to handle key security with care. Because session keys are unique to each session and never reused across sessions or users, the compromise of one session key does not expose any other session, which provides an additional layer of security.
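The two-leg scheme can be sketched with a toy cipher. To stay self-contained this substitutes a SHA-256 counter-mode keystream for real SSL/TLS, and the cache path and keys are made up; it only illustrates the decrypt-at-the-edge, cache, and re-encrypt-per-session pattern described above.

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (NOT real TLS): SHA-256 in counter mode.
    XOR-based, so the same function both encrypts and decrypts."""
    out = bytearray()
    for block in range(0, len(data), 32):
        ks = hashlib.sha256(key + block.to_bytes(8, "big")).digest()
        chunk = data[block:block + 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

class EdgeServer:
    """Akamai-style edge: decrypt the origin leg, cache the plaintext,
    re-encrypt under a fresh ephemeral key for each end-user session."""
    def __init__(self, origin_key: bytes):
        self.origin_key = origin_key
        self.cache = {}
    def ingest(self, path: str, ciphertext: bytes):
        # Leg 1 terminates here: plaintext sits in the edge cache.
        self.cache[path] = keystream_xor(self.origin_key, ciphertext)
    def serve(self, path: str):
        # Leg 2: a brand-new session key per request.
        session_key = secrets.token_bytes(32)
        return session_key, keystream_xor(session_key, self.cache[path])
```

Serving the same cached object twice yields two different session keys and two different ciphertexts, mirroring the ephemeral-key property discussed above.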
Jamming Attacks in Software Defined Networks (SDN)
In Software Defined Networks, the controller needs to collect a lot of information from the switches that it manages and issue commands to them. Therefore, it is essential to keep the control channel open and available between the controller and the switches. In the original implementation of SDN, the control channel and data channels share the same link, which leads to the potential of jamming attacks upon the controller.
1. How is the Jamming Attack Conducted?
Let us refer to the diagram for a visual understanding. Consider an environment with an SDN (Software-Defined Networking) controller, data plane devices S1–S5, and control channels over which the controller sends commands to these devices. In this topology, control traffic and ordinary data traffic share the same physical links. The attacker, acting as one of the hosts, sends a large volume of data traffic toward another host so that it congests a shared link, to the point where the SDN controller can no longer push control messages through it. As a result, there will be a significant delay when, for example, device S1 tries to communicate with the SDN controller. A jamming attack is carried out in this manner.
2. How Can an Attacker Identify Shared Links?
Let us talk about Adversarial Path Reconnaissance. An attacker can identify links shared between data channels and control channels by observing control channel delays. The attacker first sends a small amount of traffic toward S2, then a large amount, and compares the control channel latency from S2 and its neighbors (S3, S1) to the SDN controller in the two cases. A significant increase in latency under load indicates that the data path shares a link with the control channel. That is how an attacker can identify a shared link.
Conversely, suppose control traffic travels the route S2, S1, S5, SDN controller, while the attacker’s data traffic flows only between S2 and S3. Then there is very little variation in the control channel latency to the SDN controller when the attacker first sends a little traffic and subsequently floods S2 with traffic for congestion. In this way the attacker can determine that there is no shared link.
The control path delay can be measured from the first two packets of a new flow. For the first packet, the switch has no matching rule, so it asks the SDN controller for the new flow rule and forwards the packet only after the rule is installed; RTT 1 (Round-Trip Time) therefore includes both the data path and the control path. The second packet matches the freshly installed rule and is forwarded directly, without going to the SDN controller, giving RTT 2. The control path delay is therefore RTT 1 − RTT 2.
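The measurement and the shared-link test can be sketched as follows; the RTT values and the 2x inflation threshold are illustrative choices, not part of any standard.

```python
def control_path_delay(rtt_samples):
    """First packet of a new flow triggers a controller round trip
    (table-miss -> packet-in -> flow-mod); later packets are forwarded
    in hardware. Subtracting the two RTTs isolates the control path."""
    rtt1, rtt2 = rtt_samples[0], min(rtt_samples[1:])
    return rtt1 - rtt2

def shares_control_link(delay_idle, delay_loaded, threshold=2.0):
    """Adversarial path reconnaissance: if flooding a data path inflates
    the measured control path delay well beyond its idle value, the data
    path and the control channel share a link. Threshold is illustrative."""
    return delay_loaded > threshold * delay_idle
```

An attacker would run `control_path_delay` once with the network idle and once while flooding a candidate data path, then feed both numbers to `shares_control_link`.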
3. Defenses Against Jamming Attacks
- Implement a Priority Queue (PQ) or Weighted Round Robin (WRR) queue in switches to deliver control traffic with high priority. Since the majority of commercial SDN switches support one of these two queueing disciplines, this defense can effectively protect control traffic against malicious data traffic.
- Employ SDN OpenFlow meter tables in the hardware switches to proactively reserve bandwidth for control traffic. Control traffic can be well protected by reserving enough bandwidth. The significant drawback is that the reserved bandwidth cannot be used by other traffic, even when a large amount of it sits idle.
- Intentionally introduce random delays during the installation of flow rules, thereby disrupting path reconnaissance and causing inaccurate measurements of the control paths. However, adding random delays affects rule installation for all network flows, and it is particularly harmful to delay-sensitive mice flows (short flows).
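The first defense can be sketched with Python’s heapq: two traffic classes, with control traffic always draining first and FIFO ordering preserved within a class. The class constants are made up for the sketch.

```python
import heapq
from itertools import count

class PrioritySwitchQueue:
    """Two-class egress queue: control traffic always drains before
    bulk data, so a data flood cannot starve the control channel."""
    CONTROL, DATA = 0, 1
    def __init__(self):
        self._heap = []
        self._seq = count()                    # keeps FIFO order within a class
    def enqueue(self, pkt, klass):
        heapq.heappush(self._heap, (klass, next(self._seq), pkt))
    def dequeue(self):
        return heapq.heappop(self._heap)[2]
```

Even if an attacker has already queued a backlog of data packets, a later control message jumps the queue, which is exactly the property that defeats the jamming attack.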
Virtual Private Networks (VPN)
1. Benefits of Implementing a VPN
- Bypassing Geo-Restrictions: Geo-restrictions are a content-management technique used to limit access to certain platforms or content by region. By using a VPN, a user can disguise his real IP address and appear to be in a region that the platform accepts, thereby gaining access to geographically restricted content. Some entertainment platforms are also geographically restricted; with a VPN, users can access these platforms at any time.
- Enhances User’s Online Privacy: A website that a user visits can see his actual IP address, and a digital trace is always left when the internet is used. ISPs can track and sell this type of data to marketers, who can use the user’s footprint to develop a precise customer profile and then target him with relevant advertisements. When using a VPN, the user’s IP address is hidden, protecting him from this type of risk as he browses the internet.
- Protects from Cyber-Attacks: Cybercriminals can spoof Wi-Fi networks to fool people into connecting, and they frequently exploit open Wi-Fi to do so. They can steal users’ private data, including bank and credit card information, when a user interacts with them. An individual should make use of a VPN to avoid this. By encrypting the user’s connection, a VPN protects all of his sensitive information and renders it meaningless to hackers.
- Prevents Bandwidth Throttling: With bandwidth throttling, the ISP deliberately restricts a user’s bandwidth. ISPs typically do this periodically for consumers who use the internet heavily, which may push them toward more expensive data plans and subscriptions. By utilizing a VPN, a user can prevent the ISP from tracking his online activities and bandwidth usage. This implies that even if the user consumes more bandwidth, the ISP won’t notice and won’t act on it.
2. Potential Drawbacks of VPNs
- Slows Down the Internet Speed: To secure the user’s data, a VPN must perform an encryption procedure, which takes time and can negatively affect the user’s online experience. Some VPNs are also significantly slower than others, so a user should pick one that delivers fast connectivity without compromising security. The three most likely causes of slower speeds are traffic rerouting, data encryption, and the bandwidth overhead of the tunnel. It is therefore beneficial to choose a VPN with minimal connection loss and high speed.
- Stronger Anti-VPN Software: Some nations consider the use of virtual private networks, or VPNs, to be unlawful. To prevent people from accessing information, websites, and other prohibited content, anti-VPN software has been developed. Users should make an effort to use a VPN that has a chance of defeating these powerful VPN blocks. The first step in overcoming this problem is to use a premium VPN.
- Connection Dropping: The loss of the VPN connection is among a user’s top concerns, because once the connection is broken, the encrypted tunnel is gone, the user’s IP address is exposed, and his anonymity is destroyed. A kill switch option is available on some VPNs. This is an essential feature since it immediately disconnects the user from the internet if the VPN’s encrypted connection is lost, protecting his safety, security, and anonymity.
- Configuration Difficulty: Not all VPN services are properly configured, and a VPN that is configured incorrectly can easily leave personal data unprotected; improperly set up VPNs cause IP and DNS leaks. Manually setting up and configuring a VPN connection is also difficult because of the many components involved. A user-friendly provider will make using a VPN easy, enjoyable, and painless.
Using VPN to Access Blocked Websites
The student’s device and each website he visits have unique IP addresses, and an IP address is what most services use to determine one’s location and tailor the experience accordingly. The UNCC website appears to be blocked because the organization (university) or ISP (Internet Service Provider) may have set up specific verification and region checks, or restricted access from outside the campus. This is done through a combination of content filtering software and blacklists consisting of website hostnames, URLs, and IP addresses. Usually, outgoing requests are examined at the gateway level by a proxy server, firewall, or router with content filtering capabilities or blacklists.
This can be bypassed when all of the traffic from his device is directed via a VPN. The VPN helps him bypass restrictions by changing his apparent IP address: with a VPN endpoint on campus, he appears to be inside the campus network, so the UNCC content is unblocked. When a VPN client user requests a restricted website, the traffic does not go out through the network adapter directly, where it would be blocked. Instead, the VPN client routes the packets through the VPN tunnel and delivers them to the VPN server, which then forwards them to their target destination. The reply packets eventually reach the VPN server, which routes them back through the VPN tunnel to the VPN client. In this manner, the VPN enables the client to bypass firewalls, and this is how the student can complete his final exam from a coffee shop using a VPN.
A network communications path between two networks is an IP tunnel. By encapsulating the packets of the other network protocol, it is utilized to transport that protocol. When there is no native routing connection between two disjointed IP networks, IP tunnels are frequently used to connect them. They do this by using an intermediate transport network and an underlying routable protocol. Every IP packet used in IP tunneling, together with the source and destination IP networks’ addressing data, is wrapped inside a different packet format specific to the transit network. Gateways are used to establish the endpoints of the IP tunnel over the transit network at the boundaries between the source network and the transit network as well as the transit network and the destination network. To provide a standard IP route between the source and destination networks, the IP tunnel endpoints transform into native IP routers. Refer to the diagram for an easier understanding.
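The encapsulation step at a tunnel gateway can be sketched with struct. Protocol number 4 is IP-in-IP; the checksum is left zero for brevity, and the addresses are documentation placeholders.

```python
import struct

def encapsulate_ipip(inner_packet: bytes, tunnel_src: str, tunnel_dst: str) -> bytes:
    """Wrap a full inner IP packet in an outer IPv4 header (protocol 4,
    IP-in-IP) addressed between the two tunnel gateways. Checksum left
    zero for brevity; addresses are illustrative."""
    src = bytes(map(int, tunnel_src.split(".")))
    dst = bytes(map(int, tunnel_dst.split(".")))
    outer = struct.pack("!BBHHHBBH4s4s",
                        0x45, 0, 20 + len(inner_packet),   # ver/IHL, TOS, total len
                        0, 0, 64, 4, 0,                    # id, frag, TTL, proto=IPIP, csum
                        src, dst)
    return outer + inner_packet

def decapsulate_ipip(packet: bytes) -> bytes:
    """Tunnel endpoint: strip the outer header, recover the inner packet."""
    assert packet[9] == 4, "not an IP-in-IP packet"
    ihl = (packet[0] & 0x0F) * 4
    return packet[ihl:]
```

Intermediate routers in the transit network only ever see the outer header addressed between the two gateways, which is precisely why the inner addressing (and any filtering keyed on it) never comes into play en route.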
Detecting and Mitigating Malware in SDN
1. Information Needed for Anomaly Detection
A typical SDN security architecture has three components: switches, a controller, and a firewall. The SDN controller can respond to anomalies by steering particular network traffic to the firewall. When an unknown flow, say a malware packet carrying 240 KB of data, arrives at a switch, the SDN controller routes its packets to the firewall, which examines the headers and contents to determine whether the packet contains malware. If the packet is deemed malicious, the firewall alerts the SDN controller. To support this, the SDN controller must monitor the hash of each payload, the similarity between payloads, and the sizes of the TCP packets being sent. Similarity in payload content is crucial for detecting malware, since self-replicating malware produces many flows with near-identical payloads, while multiple packets carrying the same destination IP address in their headers can be used to identify DDoS attempts. This makes it comparatively simple for the SDN controller to find the issue.
2. Actions to Stop Malware Propagation
When an SDN network detects an unknown flow using a firewall, the switch that sees the unknown flow forwards its packets to the firewall through the SDN controller. The firewall then decides whether to install flow rules into each of the switches. If the firewall judges the packet to be malware with suspicious patterns, it installs drop rules, so that the next time such a malicious packet is seen, the switches drop it directly.
When an SDN network detects an unknown flow using a DDoS (Distributed Denial of Service) Attack Mitigator, the switches observe the unknown flows and report, through the SDN controller, that they have received an unusually large number of them. The SDN controller observes the large spike in the number of flows, and the DDoS Attack Mitigator responds by randomly dropping packets and by forwarding incoming packets along a separate path to a honeypot, where the behavior of the attack can be observed. This is how such attacks can be detected and mitigated.
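The two heuristics described above — repeated payload hashes for self-replicating malware, and many flows converging on one destination for DDoS — can be sketched as follows. The flow dictionaries and the threshold of 50 are made up for the illustration.

```python
import hashlib
from collections import Counter

def classify_flows(flows, ddos_threshold=50):
    """Heuristics from the text: identical payload hashes across flows
    suggest self-replicating malware; many flows converging on one
    destination IP suggest a DDoS. Threshold is illustrative."""
    payload_hashes = Counter(hashlib.sha256(f["payload"]).hexdigest()
                             for f in flows)
    dst_counts = Counter(f["dst_ip"] for f in flows)
    malware_hashes = {h for h, n in payload_hashes.items() if n > 1}
    ddos_targets = {ip for ip, n in dst_counts.items() if n >= ddos_threshold}
    return malware_hashes, ddos_targets
```

A controller application could run this over the flows mirrored to it and then push drop rules (for malware hashes) or honeypot-redirect rules (for DDoS targets) down to the switches.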
Real-World Examples of DDoS Attacks
1. Ping of Death (PoD)
Ping of Death (PoD) is a specific kind of protocol DDoS attack in which the attacker uses the straightforward ping command to transmit malformed or oversized packets to crash, destabilize, or freeze the targeted machine or service. PoD attacks exploit vulnerabilities that have been patched in most modern systems, but the attack remains applicable and hazardous on an unpatched system. A properly formed IPv4 packet, including the IP header, can be at most 65,535 bytes long, whereas a standard ping packet is only 84 bytes. Many older computer systems could not handle larger packets and would crash when faced with them. Because sending a single ping packet longer than 65,535 bytes violates the Internet Protocol, attackers instead send the malicious packet in fragments. When the target system reassembles the fragments into a packet larger than the maximum, a memory overflow may occur, which can lead to a crash and other system issues.
Mitigation
Adding checks to the reassembly process to ensure that the maximum packet size constraint will not be exceeded after packet recombination is one way to thwart the attack.
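Such a bound check can be sketched directly; offsets are in 8-byte units, as carried in the IPv4 header’s fragment offset field.

```python
MAX_IPV4_PACKET = 65535

def fragment_in_bounds(offset_units, payload_len, has_more):
    """Reject any fragment whose claimed extent would push the
    reassembled datagram past the 65,535-byte IPv4 maximum. Offsets
    are in 8-byte units, as carried in the IP header."""
    end = offset_units * 8 + payload_len
    if end > MAX_IPV4_PACKET:
        return False                   # would overflow on reassembly
    if has_more and payload_len % 8 != 0:
        return False                   # non-final fragments must be 8-byte multiples
    return True
```

Running this check on every arriving fragment, before any buffer is written, is what prevents the over-65,535-byte reassembly that the Ping of Death relies on.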
2. SYN Flood Attack
First, recall how a TCP connection is established: the client and the server perform a three-way handshake. The client initiates a connection request by sending the server a SYN message; the server acknowledges by returning a SYN-ACK message; and the connection is made when the client replies with an ACK message. A SYN flood attack is a form of denial-of-service attack in which the attacker rapidly initiates TCP connections with SYN requests to a server and never responds to the server’s SYN-ACK.
In a SYN flood attack, the attacker repeatedly sends SYN packets to the server’s ports, frequently using spoofed IP addresses. The server receives numerous connection requests that seem valid, but it is unaware of the attack. Each targeted open port responds to each attempt with a SYN-ACK packet. Because the IP address is spoofed, the malicious client either never receives the SYN-ACK in the first place or deliberately fails to transmit the required ACK. In either case, the server under attack waits for some time for the acknowledgment of its SYN-ACK message, holding a half-open connection.
Mitigation
- One solution is to increase the backlog queue size: raise the operating system’s maximum number of half-open connections to handle a flood of SYN packets. The system must reserve extra memory resources to hold all the new requests in the larger backlog. System performance will suffer if there is not enough RAM to accommodate the larger backlog queue, but this is preferable to a denial of service.
- Another solution is to recycle the oldest half-open TCP connection. The oldest half-open connection in this method is overwritten once the backlog has been completely filled. This strategy only functions when connections can fully establish faster than the amount of time it takes for fake SYN packets to fill the backlog. If the attack volume is raised or the backlog is too tiny, it fails.
- SYN cookies are another way to mitigate this attack. Using cryptographic hashing, the server derives its initial sequence number (seqno) from the client’s IP address, port number, and possibly other distinctive identifying data, and sends it in the SYN-ACK response without allocating any connection state. Only when the client replies correctly to this crafted response is a TCB (Transmission Control Block) created for the corresponding TCP connection. This method prevents TCP SYN floods from exhausting the server’s resources.
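A simplified SYN-cookie sketch: the cookie is an HMAC over the connection 4-tuple, the client’s initial sequence number, and a coarse time counter. The secret, field layout, and truncation here are illustrative; real implementations also pack an MSS index and stricter timestamp handling into the value.

```python
import hashlib
import hmac
import struct

SECRET = b"server-secret"   # illustrative; rotated periodically in practice

def syn_cookie(src_ip, src_port, dst_ip, dst_port, client_isn, t):
    """Encode connection state in the server's initial sequence number
    so no TCB is allocated until a valid ACK returns (simplified sketch)."""
    msg = struct.pack("!4sH4sHIB",
                      bytes(map(int, src_ip.split("."))), src_port,
                      bytes(map(int, dst_ip.split("."))), dst_port,
                      client_isn, t & 0xFF)
    digest = hmac.new(SECRET, msg, hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big")   # 32-bit seqno

def ack_is_valid(src_ip, src_port, dst_ip, dst_port, client_isn, t, ack_num):
    """On the final ACK, recompute the cookie: a legitimate client's ack
    number must equal cookie + 1. Only then is a TCB allocated."""
    expected = (syn_cookie(src_ip, src_port, dst_ip, dst_port,
                           client_isn, t) + 1) % (2 ** 32)
    return ack_num == expected
```

A spoofing attacker never sees the SYN-ACK and therefore cannot produce a matching `ack_num`, so the server commits no memory to the flood of half-open handshakes.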