535-Homework6

Chapter 5:

17. Describe two major differences between the ECN method and the RED method of congestion avoidance.

First, ECN signals congestion explicitly: a router sets the ECN bits in the packet header, so the end hosts are told directly that congestion is building. RED signals congestion implicitly: the router drops packets at random before its queue becomes full, with a drop probability that grows with the average queue length, and the sender has to infer congestion from the losses. Second, with ECN no packets need to be lost just to signal congestion; a marked packet is still delivered, the receiver echoes the mark back to the sender, and the sender slows down (an ECN router still drops packets once its buffer is actually full). With RED, the loss itself is the signal, so the sender only learns of congestion through a timeout or duplicate acknowledgements, and the dropped packet must be retransmitted.

24. The network of Fig. 5-32 uses RSVP with multicast trees for hosts 1 and 2 as shown. Suppose that host 3 requests a channel of bandwidth 2 MB/sec for a flow from host 1 and another channel of bandwidth 1 MB/sec for a flow from host 2. At the same time, host 4 requests a channel of bandwidth 2 MB/sec for a flow from host 1 and host 5 requests a channel of bandwidth 1 MB/sec for a flow from host 2. How much total bandwidth will be reserved for these requests at routers A, B, C, E, H, J, K, and L?

A: 2 MB/sec, B: 0, C: 1 MB/sec, E: 3 MB/sec, H: 3 MB/sec, J: 3 MB/sec, K: 2 MB/sec, L: 1 MB/sec.

Chapter 6:

1. In our example transport primitives of Fig. 6-2, LISTEN is a blocking call. Is this strictly necessary? If not, explain how a nonblocking primitive could be used. What advantage would this have over the scheme described in the text?

A blocking LISTEN suspends the process until a connection request arrives, which is not strictly necessary and can waste the process's time. A nonblocking LISTEN could simply record the process's willingness to accept connections and return immediately; when a connection request later arrives, the transport entity would notify the process (for example with an interrupt or signal), or the process could poll for it. The advantage is that the process is free to do other useful work instead of sitting idle while it waits for a connection.
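A minimal sketch of this nonblocking idea, using Python's standard socket and selectors modules rather than the textbook's primitives (the port number and the do_other_work placeholder are made up for the illustration): the registration returns immediately, and the process only handles a connection when one actually arrives.

import selectors
import socket

# Illustrative sketch: a nonblocking "LISTEN". Registration returns at once;
# the process keeps working and reacts only when a connection request arrives.
sel = selectors.DefaultSelector()

listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("0.0.0.0", 6000))     # port chosen arbitrarily for the example
listener.listen()
listener.setblocking(False)          # the listen no longer halts the process
sel.register(listener, selectors.EVENT_READ)

def do_other_work():
    pass                             # placeholder for the useful work the process can now do

while True:
    # Check briefly whether a connection request has arrived, then continue working.
    for key, _ in sel.select(timeout=0.1):
        conn, addr = key.fileobj.accept()
        print("connection request from", addr)
        conn.close()
    do_other_work()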
8. Explain the differences in using the sliding window protocol at the link layer and at the transport layer in terms of protocol timeouts.

At the link layer the timeout is typically short and fixed. A frame crosses a single physical link whose transmission and propagation delays are known and nearly constant, so if the acknowledgement has not arrived within that predictable time the frame was almost certainly lost and should be retransmitted immediately to maintain good performance. At the transport layer the timeout is typically much longer. A segment crosses an entire network of routers and queues, so the round-trip time is larger and highly variable; the protocol therefore has to wait longer before assuming a segment is lost and, in practice, estimates the timeout dynamically from measured round-trip times (as in question 35), since a timeout that is too short would trigger needless retransmissions.

26. A process on host 1 has been assigned port p, and a process on host 2 has been assigned port q. Is it possible for there to be two or more TCP connections between these two ports at the same time?

No, it is not possible. A TCP connection is identified by the pair of endpoints, so (1, p)–(2, q) names exactly one connection; a second simultaneous connection between the same two ports cannot exist.

30. Give a potential disadvantage when Nagle's algorithm is used on a badly congested network.

Nagle's algorithm holds back new data until either the outstanding data has been acknowledged by the receiver or enough data has accumulated to fill a maximum-sized segment. This is normally beneficial because it reduces the number of small packets on the network, but on a badly congested network acknowledgements return slowly, so the algorithm can delay data for a long time. The extra delay can make data arrive too late to be useful and can trigger further timeouts and retransmissions, lowering overall network performance.
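As a practical aside to question 30, latency-sensitive applications commonly avoid this buffering delay by disabling Nagle's algorithm with the standard TCP_NODELAY socket option. A minimal Python sketch (the destination host and port are placeholders, not part of the assignment):

import socket

# Sketch: disabling Nagle's algorithm with TCP_NODELAY so small writes are sent
# immediately instead of being held until earlier data is acknowledged.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
sock.connect(("example.com", 80))          # placeholder destination
sock.sendall(b"small interactive write")   # transmitted without Nagle buffering
sock.close()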
32. Consider the effect of using slow start on a line with a 10-msec round-trip time and no congestion. The receive window is 24 KB and the maximum segment size is 2 KB. How long does it take before the first full window can be sent?

The sender starts with a congestion window of one segment (2 KB), sends it, and waits for the acknowledgement. Each time the ACKs for a full window come back, the congestion window doubles: 2 KB is sent at t = 0, 4 KB at 10 msec, 8 KB at 20 msec, and 16 KB at 30 msec. The next doubling would give 32 KB, but the window is capped at the 24 KB advertised by the receiver, so the first full 24 KB window can be sent at t = 40 msec, i.e. after four round-trip times.

35. If the TCP round-trip time, RTT, is currently 30 msec and the following acknowledgements come in after 26, 32, and 24 msec, respectively, what is the new RTT estimate using the Jacobson algorithm? Use α = 0.9.

SRTT = α × SRTT + (1 − α) × R, with α = 0.9:
(1) R = 26 msec: SRTT = 0.9 × 30 + 0.1 × 26 = 29.6 msec
(2) R = 32 msec: SRTT = 0.9 × 29.6 + 0.1 × 32 = 29.84 msec
(3) R = 24 msec: SRTT = 0.9 × 29.84 + 0.1 × 24 = 29.256 msec

44. To get around the problem of sequence numbers wrapping around while old packets still exist, one could use 64-bit sequence numbers. However, theoretically, an optical fiber can run at 75 Tbps. What maximum packet lifetime is required to make sure that future 75-Tbps networks do not have wraparound problems even with 64-bit sequence numbers? Assume that each byte has its own sequence number, as TCP does.

At 75 Tbps the sender transmits 75 × 10^12 / 8 = 9.375 × 10^12 bytes/sec. The 64-bit sequence space therefore wraps after 2^64 / (9.375 × 10^12) ≈ 1,967,653 seconds (about 22.8 days). The maximum packet lifetime must be less than this, roughly 1.97 × 10^6 seconds, to prevent wraparound problems.

47. Calculate the bandwidth-delay product for the following networks: (1) T1 (1.5 Mbps), (2) Ethernet (10 Mbps), (3) T3 (45 Mbps), and (4) STS-3 (155 Mbps). Assume an RTT of 100 msec. Recall that a TCP header has 16 bits reserved for Window Size. What are its implications in light of your calculations?

(1) T1 (1.5 Mbps): 1.5 Mbps × 100 msec / 8 = 18.75 KB
(2) Ethernet (10 Mbps): 10 Mbps × 100 msec / 8 = 125 KB
(3) T3 (45 Mbps): 45 Mbps × 100 msec / 8 = 562.5 KB
(4) STS-3 (155 Mbps): 155 Mbps × 100 msec / 8 = 1.9375 MB

With 16 bits for the Window Size field, the receiver can advertise at most 64 KB (65,535 bytes), so the sender can have at most 64 KB of unacknowledged data in flight; with a 100-msec RTT this caps the throughput at about 65,535 × 8 / 0.1 ≈ 5.2 Mbps. Only the T1 line has a bandwidth-delay product (18.75 KB) smaller than this window. On Ethernet, T3, and STS-3 the bandwidth-delay product exceeds 64 KB, so the sender cannot transmit continuously and keep the pipe full; the window size, not the link bandwidth, limits the throughput.
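The arithmetic in questions 32, 35, 44, and 47 can be double-checked with a short Python script. This is only a sanity check of the numbers above, and it assumes decimal units (1 Mbps = 10^6 bit/s, 1 KB = 10^3 bytes), as in the answers:

# Sanity check of the numbers in questions 32, 35, 44, and 47.

# Q32: slow start with a 2 KB MSS, 24 KB receive window, 10 msec RTT.
cwnd_kb, rtts = 2, 0
while cwnd_kb < 24:                     # window doubles each RTT, capped at 24 KB
    cwnd_kb = min(cwnd_kb * 2, 24)
    rtts += 1
print("Q32: first full window after", rtts * 10, "msec")           # 40 msec

# Q35: Jacobson estimate SRTT = alpha*SRTT + (1 - alpha)*R with alpha = 0.9.
srtt, alpha = 30.0, 0.9
for r in (26, 32, 24):
    srtt = alpha * srtt + (1 - alpha) * r
    print("Q35: SRTT after sample", r, "msec =", round(srtt, 3))   # 29.6, 29.84, 29.256

# Q44: time for a 64-bit byte sequence space to wrap at 75 Tbps.
bytes_per_sec = 75e12 / 8                                          # 9.375e12 bytes/sec
print("Q44: wraparound after", 2**64 / bytes_per_sec, "seconds")   # about 1.97e6 s

# Q47: bandwidth-delay products for a 100 msec RTT versus the 64 KB window.
for name, mbps in (("T1", 1.5), ("Ethernet", 10), ("T3", 45), ("STS-3", 155)):
    bdp_kb = mbps * 1e6 / 10 / 8 / 1e3       # bits/s over 0.1 s, in KB
    note = "exceeds the 64 KB window" if bdp_kb > 64 else "fits within the 64 KB window"
    print(f"Q47: {name}: {bdp_kb} KB ({note})")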
Day 1: Recall that in Homework 3, using Slide 75 of the lecture slides from the file week02ab-22-0606.pdf as a checklist, you were asked to determine the policies for the mechanisms in HDLC. For this problem, do the same exercise, except this time determine what the policies are for the mechanisms in TCP: (1) flow control, (2) congestion control, (3) error control, (4) retransmission control. In addition, how well does TCP follow the principles we have found for good protocol design?