Characterizing Network Traffic for Optimal Network Design

Characterizing Network Traffic

Characterizing Traffic Flow

Characterizing traffic flow is the process of identifying sources and destinations of network traffic and analyzing the direction and symmetry of data traveling between sources and destinations.

  • Flow can be bidirectional and symmetric. (Both ends of the flow send traffic at about the same rate.)
  • In other applications, the flow can be bidirectional and asymmetric: client stations send small queries and servers send large streams of data.
  • In a broadcast application, the flow is unidirectional and asymmetric.

Identifying Major Traffic Sources and Stores

  • First identify user communities and data stores for existing and new applications.
  • A user community is a set of workers who use a particular application or set of applications.
  • A user community can be a corporate department or set of departments.
  • As more corporations use matrix management and form virtual teams to complete ad-hoc projects, it becomes more necessary to characterize user communities by application and protocol usage rather than by departmental boundary.

Documenting Traffic Flow on the Existing Network

  • Documenting traffic flow involves identifying and characterizing individual traffic flows between traffic sources and stores.
  • RFC 2722 describes an architecture for the measurement and reporting of network traffic flows, and discusses how the architecture relates to an overall traffic flow architecture for intranets and the Internet.
  • Measuring traffic flow behavior can help a network designer determine which routers should be peers in routing protocols that use a peering system, such as the Border Gateway Protocol (BGP).
  • Measuring traffic flow behavior can also help network designers do the following:
    • Characterize the behavior of existing networks
    • Plan for network development and expansion
    • Quantify network performance
    • Verify the quality of network service
    • Ascribe network usage to users and applications
  • An individual network traffic flow can be defined as protocol and application information transmitted between communicating entities during a single session.
  • A flow has attributes such as:
    • direction,
    • symmetry,
    • routing path and routing options,
    • number of packets, number of bytes, and
    • addresses for each end of the flow.
  • The simplest method for characterizing the size of a flow is to measure the number of Kbytes or Mbytes per second between communicating entities.
  • To characterize the size of a flow, use a protocol analyzer or network management system to record load between important sources and destinations.
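
As a sketch of this kind of measurement, the snippet below aggregates capture records (source, destination, byte count) into kilobytes per second for each source/destination pair. The station names and byte counts are invented for illustration; a protocol analyzer or NetFlow-style export would supply the real records.

```python
from collections import defaultdict

def summarize_flows(records, interval_seconds):
    """Aggregate capture records into kbytes/sec per (source, destination) pair.

    Each record is a (source, destination, bytes) tuple taken from a capture
    window interval_seconds long.
    """
    totals = defaultdict(int)
    for src, dst, nbytes in records:
        totals[(src, dst)] += nbytes
    # Convert byte totals to kilobytes per second over the capture interval.
    return {pair: total / 1024 / interval_seconds
            for pair, total in totals.items()}

# Hypothetical 60-second capture between a client and a file server.
capture = [
    ("client-a", "server-1", 1_500_000),   # client requests and writes
    ("client-a", "server-1", 900_000),
    ("server-1", "client-a", 12_000_000),  # server responses dominate
]
rates = summarize_flows(capture, 60)
```

The mismatch between the two directions in `rates` is exactly the kind of evidence used when characterizing a flow as asymmetric.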

Characterizing Types of Traffic Flow for New Network Applications

  • Characterize by direction and symmetry.
  • Direction specifies whether data travels in both directions or in just one direction.
  • Direction also specifies the path that a flow takes as it travels from source to destination through an internetwork.
  • Symmetry describes whether the flow tends to have higher performance or QoS requirements in one direction than the other direction.
  • Characterize network traffic flow by classifying applications as supporting one of a few well-known flow types:
    • Terminal/host traffic flow
    • Client/server traffic flow
    • Peer-to-peer traffic flow
    • Server/server traffic flow
    • Distributed computing traffic flow

Terminal/Host Traffic Flow

  • Terminal/host traffic is usually asymmetric.
  • The terminal sends a few characters and the host sends many characters.
  • Telnet is an example of an application that generates terminal/host traffic.
  • With some full-screen terminal applications, such as some IBM 3270-based terminal applications, the terminal side sends characters typed by the user and the host side returns data to repaint the screen.
  • More modern terminal applications just send changes to the user’s screen, thus reducing network traffic.
  • Terminal/host traffic flows are less prevalent on networks than they once were, but they have not disappeared.
  • Thin clients, which have become quite popular, can behave like terminal/host applications.

Client/Server Traffic Flow

  • Client/server traffic is the best known and most widely used flow type.
  • Clients send queries and requests to a server. The server responds with data or permission for the client to send data.
  • The flow is usually bidirectional and asymmetric.
  • Requests from the client are typically small frames, except when writing data to the server, in which case they are larger.
  • Responses from the server range from 64 bytes to 1500 bytes or more, depending on the maximum frame size allowed for the data link layer in use.
  • Hypertext Transfer Protocol (HTTP) is probably the most widely used client/server protocol.
  • The flow for HTTP traffic is not always between the web browser and the web server because of caching.
  • A cache engine is software or hardware that makes recently accessed web pages available locally, which can speed the delivery of the pages and reduce WAN bandwidth utilization.
  • A cache engine can also be used to control the type of content that users are allowed to view.

Thin Client Traffic Flow

  • A special case of the client/server architecture is a thin client, which is software or hardware that is designed to be particularly simple and to work in an environment where the bulk of data processing occurs on a server.
  • With thin client technology (also known as server-based computing), user applications run on a central server.
  • A downside of thin client technology is that the amount of data flowing from the server to the client can be substantial, especially when many computers start up at the same time every day.
  • Networks with thin clients should be carefully designed with sufficient capacity and an appropriate topology. Switched networking (rather than shared media) is recommended to avoid problems caused by excessive broadcast traffic.

Peer-to-Peer Traffic Flow

  • Flow is usually bidirectional and symmetric.
  • Communicating entities transmit approximately equal amounts of information.
  • There is no hierarchy.
  • Each device is considered as important as each other device, and no device stores substantially more data than any other device.
  • There are many flows in both directions.
  • Recently peer-to-peer applications for downloading music, videos, and software have gained popularity.
  • Most enterprises and many university networks disallow this type of peer-to-peer traffic for two reasons. First, it can cause an inordinate amount of traffic, and, second, the published material is often copyrighted by someone other than the person publishing it.

Server/Server Traffic Flow

  • Server/server traffic includes transmissions between servers and transmissions between servers and management applications.
  • Servers talk to other servers to implement directory services, to cache heavily used data, to mirror data for load balancing and redundancy, to back up data, and to broadcast service availability.
  • The flow is generally bidirectional.
  • The symmetry of the flow depends on the application. With most server/server applications, the flow is symmetrical, but in some cases there is a hierarchy of servers, with some servers sending and storing more data than others.

Distributed Computing Traffic Flow

  • Distributed computing refers to applications that require multiple computing nodes working together to complete a job.
  • Some complex modeling and rendering tasks cannot be accomplished in a reasonable timeframe unless multiple computers process data and run algorithms simultaneously.
  • The visual effects for movies are often developed in a distributed-computing environment.
  • Data travels between a task manager and computing nodes and between computing nodes.
  • McCabe distinguishes between tightly coupled and loosely coupled computing nodes. Nodes that are tightly coupled transfer information to each other frequently. Nodes that are loosely coupled transfer little or no information.

Traffic Flow in Voice over IP Networks

  • The most important concept to understand when considering traffic flow in VoIP networks is that there are multiple flows.
  • The flow associated with transmitting the audio voice is separate from the flows associated with call setup and teardown.
  • The flow for transmitting the digital voice is essentially peer-to-peer, between two phones or PCs running software, such as Microsoft’s NetMeeting or Cisco’s SoftPhone.
  • Call setup and teardown, on the other hand, could be characterized as a client/server flow because a phone needs to talk to a more complicated device, such as a server or traditional phone switch, that understands phone numbers, addresses, capabilities negotiation, and so on.
  • These protocols generally fit the client/server paradigm, but they also use distributed and peer-to-peer processing, and have some terminal/host behavior.

Documenting Traffic Flow for New and Existing Network Applications

  • Characterize the flow type for each application and list the user communities and data stores that are associated with applications.
  • You can use Table 4-4 to enhance the Network Applications charts already discussed in Chapters 1 and 2.
  • When identifying the type of traffic flow for an application, select one of the well-known types:
    • Terminal/host
    • Client/server
    • Peer-to-peer
    • Server/server
    • Distributed computing

Characterizing Traffic Load

  • To select appropriate topologies and technologies to meet a customer’s goals, it is important to characterize traffic load with traffic flow.
  • Characterizing traffic load can help you design networks with sufficient capacity for local usage and internetwork flows.
  • The goal is simply to avoid a design that has any critical bottlenecks.
  • To avoid bottlenecks, you can research application usage patterns, idle times between packets and sessions, frame sizes, and other traffic behavioral patterns for application and system protocols.
  • Another approach to avoiding bottlenecks is simply to throw large amounts of bandwidth at the problem. A strict interpretation of systems analysis principles wouldn’t approve of such an approach, but bandwidth is cheap these days. LAN bandwidth is extremely cheap.

Calculating Theoretical Traffic Load

  • Traffic load (sometimes called offered load) is the sum of all the data network nodes have ready to send at a particular time.
  • A general goal for most network designs is that the network capacity should be more than adequate to handle the traffic load.
  • The challenge is to determine if the capacity proposed for a new network design is sufficient to handle the potential load.
  • In general, to calculate whether capacity is sufficient, only a few parameters are necessary:
    • The number of stations
    • The average time that a station is idle between sending frames
    • The time required to transmit a message once medium access is gained
  • By studying idle times and frame sizes with a protocol analyzer, and estimating the number of stations, you can determine if the proposed capacity is sufficient.
  • After you have identified the approximate traffic load for an application flow, you can estimate total load for an application by multiplying the load for the flow by the number of devices that use the application.
  • The research you do on the size of user communities and the number of data stores (servers) can help you calculate an approximate aggregated bandwidth requirement for each application and fill in the “Approximate Bandwidth Requirement for the Application” column in Table 4-4.
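
The multiplication described above can be sketched as follows; the application names, per-flow loads, and user counts are hypothetical placeholders for the figures you would gather for Table 4-4.

```python
# Hypothetical per-flow loads (kbps) and user counts for three applications.
applications = {
    "email":         {"flow_kbps": 10,  "users": 200},
    "file-transfer": {"flow_kbps": 400, "users": 30},
    "web":           {"flow_kbps": 50,  "users": 150},
}

def aggregate_kbps(app):
    # Total load for an application = load per flow x devices using it.
    return app["flow_kbps"] * app["users"]

totals = {name: aggregate_kbps(app) for name, app in applications.items()}
# email: 2000 kbps, file-transfer: 12000 kbps, web: 7500 kbps
```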

Documenting Application-Usage Patterns

  • The first step in documenting application-usage patterns is to identify user communities, the number of users in the communities, and the applications the users employ.
  • In addition to identifying the total number of users for each application, you should also document the following information:
    • The frequency of application sessions (number of sessions per day, week, month or whatever time period is appropriate)
    • The length of an average application session
    • The number of simultaneous users of an application
  • If it is not practical to research these details, you can make some assumptions:
    • Assume that the number of users of an application equals the number of simultaneous users.
    • Assume that all applications are used all the time so that your bandwidth calculation is a worst-case (peak) estimate.
    • Assume that each user opens just one session and that a session lasts all day until the user shuts down the application at the end of the day.
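
The gap between a refined estimate and the worst-case assumptions can be sketched numerically; all figures below are invented for illustration.

```python
def refined_load_kbps(simultaneous_users, session_kbps):
    """Load when only the measured number of simultaneous sessions is active."""
    return simultaneous_users * session_kbps

def worst_case_load_kbps(total_users, session_kbps):
    """Peak estimate under the simplifying assumptions above: every user keeps
    one session open all day, so simultaneous users equal total users."""
    return total_users * session_kbps

# Hypothetical application: 300 users, 40 typically active, 80-kbps sessions.
refined = refined_load_kbps(40, 80)         # 3200 kbps
worst_case = worst_case_load_kbps(300, 80)  # 24000 kbps
```

The worst-case figure is deliberately pessimistic; if a design can carry it, the real load will fit comfortably.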

Refining Estimates of Traffic Load Caused by Applications

  • You need to research the size of data objects sent by applications,
  • the overhead caused by protocol layers, and
  • any additional load caused by application initialization.
  • (Some applications send much more traffic during initialization than during steady-state operation.)

Estimating Traffic Overhead for Various Protocols

  • To completely characterize application behavior, you should investigate which protocols an application uses.
  • You can calculate traffic load more precisely by adding the size of protocol headers to the size of data objects.
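
As a sketch, the header sizes below are the typical values for TCP over IPv4 over Ethernet with no options; real traffic may add preamble, interframe gap, VLAN tags, or header options on top of these.

```python
# Typical header sizes in bytes, assuming no IP or TCP options.
ETHERNET_OVERHEAD = 18  # 14-byte header + 4-byte frame check sequence
IP_HEADER = 20
TCP_HEADER = 20

def frame_bytes(payload_bytes):
    """Bytes on the wire for one payload carried over TCP/IP on Ethernet."""
    return payload_bytes + TCP_HEADER + IP_HEADER + ETHERNET_OVERHEAD

def overhead_fraction(payload_bytes):
    total = frame_bytes(payload_bytes)
    return (total - payload_bytes) / total

# A 512-byte data object carries 58 bytes of headers: roughly 10 percent overhead.
```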

Estimating Traffic Load Caused by Workstation and Session Initialization

  • Workstation initialization can cause a load on networks due to the number of packets and, in some cases, the number of broadcast packets.
  • This is becoming less of a problem as network bandwidth becomes less expensive and workstation CPUs become fast enough that broadcast processing is no longer a major concern.
  • The tables in Appendix A show typical workstation behavior for the following protocols:
    • Novell NetWare
    • AppleTalk
    • TCP/IP
    • TCP/IP with DHCP
    • NetBIOS (NetBEUI)
    • NetBIOS with a Windows Internet Naming Service (WINS) server
    • SNA

Boot-Time Traffic for Older Protocols

  • Many universities, schools, governments, and nonprofit organizations still use older protocols.
  • Also, some companies that have theoretically standardized on TCP/IP are often surprised when a network engineer studies their network traffic.
  • A lot of equipment that is supposedly running only TCP/IP tends to still send out traffic for other protocols.
  • Printers are notorious for still sending Novell NetWare traffic in many different formats.

Estimating Traffic Load Caused by Routing Protocols

  • To characterize network traffic caused by routing protocols, Table 4-7 shows the amount of bandwidth used by legacy distance-vector routing protocols.
  • Estimating traffic load caused by legacy routing protocols is especially important in a topology that includes many networks on one side of a slow WAN link.
  • A router sending a large distance-vector routing table every minute can use a significant percentage of WAN bandwidth.
  • Newer routing protocols, such as OSPF and EIGRP, use very little bandwidth. In the case of OSPF, your main concern should be the amount of bandwidth consumed by the database synchronization packets that routers send every 30 minutes.
  • Subdividing an OSPF network into areas and using route summarization minimizes this traffic.
  • EIGRP also sends Hello packets, but more frequently than OSPF (every 5 seconds). On the other hand, EIGRP doesn’t send any periodic route updates or database synchronization packets. It only sends route updates when there are changes.
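
The arithmetic for a periodic distance-vector update can be sketched as follows; the per-route size, header size, and routes-per-packet figures are illustrative stand-ins, not taken from any specific protocol's packet format.

```python
import math

def routing_update_utilization(route_count, bytes_per_route, header_bytes,
                               routes_per_packet, interval_seconds, link_bps):
    """Fraction of a WAN link consumed by periodic routing-table updates."""
    packets = math.ceil(route_count / routes_per_packet)
    total_bytes = packets * header_bytes + route_count * bytes_per_route
    bits_per_second = total_bytes * 8 / interval_seconds
    return bits_per_second / link_bps

# Hypothetical: 500 routes, 20 bytes each, 32-byte packet headers, 25 routes
# per packet, full updates every 30 seconds, over a 56-kbps link.
utilization = routing_update_utilization(500, 20, 32, 25, 30, 56_000)
# About 5 percent of the link is spent on routing updates alone.
```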

Characterizing Traffic Behavior

  • To select appropriate network design solutions, you need to understand protocol and application behavior in addition to traffic flows and load.
  • For example, to select appropriate LAN topologies, you need to investigate the level of broadcast traffic on the LANs.
  • To provision adequate capacity for LANs and WANs, you need to check for extra bandwidth utilization caused by protocol inefficiencies and nonoptimal frame sizes or retransmission timers.

Broadcast/Multicast Behavior

  • A broadcast frame is a frame that goes to all network stations on a LAN.
  • A multicast frame is a frame that goes to a subset of stations.
  • Layer 2 internetworking devices, such as switches and bridges, forward broadcast and multicast frames out all ports.
  • The forwarding of broadcast and multicast frames can be a scalability problem for large flat (switched or bridged) networks.
  • A router does not forward broadcasts or multicasts. All devices on one side of a router are considered part of a single broadcast domain.
  • In addition to including routers in a network design to decrease broadcast forwarding, you can also limit the size of a broadcast domain by implementing virtual LANs (VLANs).
  • Although a VLAN can span many switches, broadcast traffic within a VLAN is not transmitted outside the VLAN.
  • Too many broadcast frames can overwhelm end stations, switches, and routers. Another possible cause of heavy broadcast traffic is intermittent broadcast storms caused by misconfigured or misbehaving network stations.
  • Broadcast traffic is necessary and unavoidable.
  • Routing and switching protocols use broadcasts and multicasts to share information about the internetwork topology.
  • Servers send broadcasts and multicasts to advertise their services.
  • Desktop protocols such as AppleTalk, NetWare, NetBIOS, and TCP/IP require broadcast and multicast frames to find services and check for uniqueness of addresses and names.

Network Efficiency

  • Efficiency refers to whether applications and protocols use bandwidth effectively.
  • Efficiency is affected by frame size, the interaction of protocols used by an application, windowing and flow control, and error-recovery mechanisms.

Frame Size

  • Using a frame size that is the maximum supported for the medium in use has a positive impact on network performance for bulk-transfer applications.
  • For file-transfer applications, in particular, you should use the largest possible maximum transmission unit (MTU).
  • Depending on the protocol stacks that your customer will use in the new network design, the MTU can be configured for some applications.
  • In an IP environment, you should avoid setting the MTU larger than the maximum supported by the media the frames traverse, to avoid fragmentation and reassembly of frames.
  • Modern operating systems support MTU discovery. With MTU discovery, the software can dynamically discover and use the largest frame size that will traverse the network without requiring fragmentation.
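
A simplified sketch of why exceeding the path MTU costs extra: each IPv4 fragment repeats the IP header, and fragment payloads (except the last) must be multiples of 8 bytes. The model below ignores header options and higher-layer headers.

```python
import math

def fragment_count(packet_bytes, mtu, ip_header=20):
    """Number of IPv4 fragments needed when a datagram exceeds the path MTU."""
    payload = packet_bytes - ip_header
    # Largest 8-byte-aligned payload that fits in one fragment.
    per_fragment = (mtu - ip_header) // 8 * 8
    return math.ceil(payload / per_fragment)

# A 4000-byte datagram crossing a 1500-byte-MTU link becomes 3 fragments,
# each carrying its own 20-byte IP header.
```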

Protocol Interaction

  • Inefficiency is also caused by the interaction of protocols and the misconfiguration of acknowledgment timers and other parameters.
  • Reliability features, such as timeouts, acknowledgments, and polling, are implemented in three layers: LLC, NetBIOS, and SMB.
  • (Token Ring also sets the address recognized and frame copied bits in the Frame Status (FS) field, so it could be argued that reliability is implemented in four layers.)
  • Due to overhead and acknowledgments, 1407 bytes were required to transfer 1028 bytes of user data (counting the final LLC and NetBIOS acknowledgments from Joe’s station). Approximately 27 percent of network traffic is overhead.
  • To improve efficiency on this network, the LLC and NetBIOS timers could be increased. If the timers are increased, LLC and NetBIOS can include their acknowledgments in the SMB response.
  • Also, if possible, the network administrators should determine why the server sets the poll bit at the LLC layer and why the server took so long to return the SMB data.
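
The overhead figure quoted above follows directly from the byte counts:

```python
def overhead_percent(total_bytes, data_bytes):
    """Percentage of traffic that is protocol overhead rather than user data."""
    return (total_bytes - data_bytes) / total_bytes * 100

# The SMB transfer described above: 1407 bytes on the wire carried only
# 1028 bytes of user data.
assert round(overhead_percent(1407, 1028)) == 27
```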

Windowing and Flow Control

  • To really understand network traffic, you need to understand windowing and flow control.
  • A TCP/IP device, for example, sends segments (packets) of data in quick sequence, without waiting for an acknowledgment, until its send window has been exhausted.
  • A station’s send window is based on the recipient’s receive window.
  • The recipient states in every TCP packet how much data it is ready to receive. This total can vary from a few bytes up to 65,535 bytes.
  • The recipient’s receive window is based on how much memory the receiver has and how quickly it can process received data.
  • You can optimize network efficiency by increasing memory and CPU power on end stations, which can result in a larger receive window.

NOTE

  • Theoretically, the optimal window size is the bandwidth of a link multiplied by delay on the link.
  • To maximize throughput and use bandwidth efficiently, the send window should be large enough for the sender to completely fill the bandwidth pipe with data before stopping transmission and waiting for an acknowledgment.
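
The note above describes the bandwidth-delay product; a quick sketch, using round-trip time as the delay figure:

```python
def optimal_window_bytes(bandwidth_bps, round_trip_seconds):
    """Bandwidth-delay product: bytes in flight needed to keep the pipe full."""
    return bandwidth_bps * round_trip_seconds / 8

# A 1.544-Mbps T1 link with a 70-ms round-trip time needs roughly a
# 13,510-byte window; the classic 65,535-byte TCP limit covers it comfortably.
window = optimal_window_bytes(1_544_000, 0.070)
```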

Error-Recovery Mechanisms

  • Poorly designed error-recovery mechanisms can waste bandwidth.
  • For example, if a protocol retransmits data very quickly without waiting a long enough time to receive an acknowledgment, this can cause performance degradation for the rest of the network due to the bandwidth used. Acknowledgments at Layer 2 waste bandwidth as seen earlier in the “Protocol Interaction” section.
  • Connectionless protocols usually do not implement error recovery. Most data link layer and network layer protocols are connectionless. Some transport layer protocols, such as UDP, are connectionless.
  • Error-recovery mechanisms for connection-oriented protocols vary. TCP implements an adaptive retransmission algorithm, which means that the rate of retransmissions slows when the network is congested, which optimizes the use of bandwidth.
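
TCP's adaptive behavior comes from its retransmission timer, which is recomputed from smoothed round-trip-time measurements (the scheme standardized in RFC 6298). The sketch below uses the standard alpha and beta weights; the 0.2-second floor on the variance term is an assumption standing in for the clock-granularity term in the RFC.

```python
def update_rto(srtt, rttvar, measured_rtt, alpha=0.125, beta=0.25):
    """One update step of a TCP-style adaptive retransmission timer.

    As measured round-trip times grow on a congested network, the smoothed
    RTT and its variance grow, so the retransmission timeout backs off
    instead of flooding the network with premature retransmissions.
    """
    rttvar = (1 - beta) * rttvar + beta * abs(srtt - measured_rtt)
    srtt = (1 - alpha) * srtt + alpha * measured_rtt
    rto = srtt + max(0.2, 4 * rttvar)  # 0.2-s floor: assumed granularity
    return srtt, rttvar, rto

# RTT samples climbing as the network congests: the timeout climbs too.
srtt, rttvar = 0.100, 0.025
for sample in (0.100, 0.150, 0.300):
    srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
```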

Characterizing Quality of Service Requirements

  • You need to also characterize the QoS requirements for applications.
  • Knowing the load (bandwidth) requirement for an application is not sufficient. You also need to know if the requirement is flexible or inflexible. Some applications continue to work (although slowly) when bandwidth is not sufficient.
  • Other applications, such as voice and video applications, are rendered useless if a certain level of bandwidth is not available.
  • Voice is also inflexible with regard to delay, and it is sensitive to packet loss, which results in voice clipping and skips.
  • Without proper network-wide QoS configuration, loss can occur because of congested links and poor packet buffer and queue management on routers.

ATM Quality of Service Specifications

  • In their document “Traffic Management Specification Version 4.1,” the ATM Forum does an excellent job of categorizing the types of service that a network can offer to support different sorts of applications.
  • The ATM Forum defines six service categories, each of which is described in more detail later in this section:
    • Constant bit rate (CBR)
    • Realtime variable bit rate (rt-VBR)
    • Non-realtime variable bit rate (nrt-VBR)
    • Unspecified bit rate (UBR)
    • Available bit rate (ABR)
    • Guaranteed frame rate (GFR)
  • For each service category, the ATM Forum specifies a set of parameters to describe both the traffic presented to the network and the QoS required of the network.
  • The ATM Forum also defines traffic control mechanisms that the network can use to meet QoS objectives.
  • The network can implement such mechanisms as connection admission control and resource allocation differently for each service category.
  • Service categories are distinguished as being either realtime or non-realtime.
  • CBR and rt-VBR are realtime service categories.
  • Realtime applications, such as voice and video applications, require tightly constrained delay and delay variation.
  • Non-realtime applications, such as client/server and terminal/host data applications, do not require tightly constrained delay and delay variation. Nrt-VBR, UBR, ABR, and GFR are non-realtime service categories.

IETF Integrated Services Working Group Quality of Service Specifications

  • In an IP environment, you can use the work that the IETF Integrated Services working group is doing on QoS requirements. In RFC 2205, the working group describes the Resource Reservation Protocol (RSVP).
  • In RFC 2208, the working group provides information on the applicability of RSVP and some guidelines for deploying it. RFCs 2209 through 2216 are also related to supporting QoS on the Internet and intranets.
  • RSVP is a setup protocol used by a host to request specific qualities of service from the network for particular application flows. RSVP is also used by routers to deliver QoS requests to other routers (or other types of nodes) along the path(s) of a flow.
  • RSVP requests generally result in resources being reserved in each node along the path.
  • RSVP implements QoS for a particular data flow using mechanisms collectively called traffic control. These mechanisms include the following:
    • A packet classifier that determines the QoS class (and perhaps the route) for each packet
    • An admission control function that determines whether the node has sufficient available resources to supply the requested QoS
    • A packet scheduler that determines when particular packets are forwarded to meet QoS requirements of a flow
  • RSVP works in conjunction with mechanisms at end systems to request services.
  • To ensure that QoS conditions are met, RSVP clients provide the intermediate network nodes an estimate of the data traffic they will generate.
  • This is done with a traffic specification (TSpec) and a service-request specification (RSpec), as described in RFC 2216.
  • The RSVP protocol provides a general facility for reserving resources. RSVP does not define the different types of services that applications can request.
  • The Integrated Services working group describes services in RFCs 2210 through 2216.

Controlled-Load Service

  • Controlled-load service is defined in RFC 2211 and provides a client data flow with a QoS closely approximating the QoS that same flow would receive on an unloaded network.
  • Admission control is applied to requests to ensure that the requested service is received even when the network is overloaded.
  • The controlled-load service is intended for applications that are highly sensitive to overloaded conditions, such as real-time applications.
  • These applications work well on unloaded networks, but degrade quickly on overloaded networks.
  • Assuming the network is functioning correctly, an application requesting controlled-load service can assume the following:
    • A very high percentage of transmitted packets will be successfully delivered by the network to the receiving end nodes.
    • (The percentage of packets not successfully delivered must closely approximate the basic packet-error rate of the transmission medium.)
    • The transit delay experienced by a very high percentage of the delivered packets will not greatly exceed the minimum transmit delay experienced by any successfully delivered packet.
    • (This minimum transit delay includes speed-of-light delay plus the fixed processing time in routers and other communications devices along the path.)
  • The controlled-load service does not accept or make use of specific target values for parameters such as delay or loss.

Guaranteed Service

  • RFC 2212 describes the network node behavior required to deliver a service called guaranteed service that guarantees both bandwidth and delay characteristics.
  • Guaranteed service provides firm (mathematically probable) bounds on end-to-end packet-queuing delays.
  • It does not attempt to minimize jitter and is not concerned about fixed delay, such as transmission delay.
  • (Fixed delay is a property of the chosen path, which is determined by the setup mechanism, such as RSVP.)
  • Guaranteed service guarantees that packets will arrive within the guaranteed delivery time and will not be discarded due to queue overflows, provided the flow’s traffic conforms to its TSpec.
  • A series of network nodes that implement RFC 2212 ensure a level of bandwidth that, when used by a regulated flow, produces a delay-bounded service with no queuing loss (assuming no failure of network components or changes in routing during the life of the flow).
  • Guaranteed service is intended for applications that need a guarantee that a packet will arrive no later than a certain time after it was transmitted by its source.
  • For example, some audio and video playback applications are intolerant of a packet arriving after its expected playback time.
  • Applications that have real-time requirements can also use guaranteed service, although it should be requested sparingly because of the negative effect it could have on other applications.

IETF Differentiated Services Working Group Quality of Service Specifications

  • The IETF also has a Differentiated Services working group that works on QoS-related specifications.
  • RFC 2475, “An Architecture for Differentiated Services,” defines an architecture for implementing scalable service differentiation in an internetwork or the Internet.
  • Although the integrated services (RSVP) model, described in the previous section, offers finer granularity, it is less scalable than the differentiated service model.
  • The integrated services model allows sources and receivers to exchange signaling messages that establish packet classification and forwarding state on each router along the path between them.

Grade of Service Requirements for Voice Applications

  • In a voice network, in addition to there being a need for QoS to ensure low and nonvariable delay and low packet loss, there is also a need for what voice experts call a high grade of service (GoS).
  • GoS refers to the fraction of calls that are successfully completed in a timely fashion. Call completion rate (CCR) is another name for the requirement.
  • A network must have high availability to support a high GoS.
  • In an unreliable network, GoS is adversely affected when call setup and teardown messages are lost.
  • A lost signal for call setup can result in an unsuccessful call attempt. A lost signal for call teardown can cause voice resources to be unavailable for other calls.
  • Call setup and teardown messages aren’t as delay sensitive as the audio sent during the actual call, so retransmission of these messages is permitted, although it should be minimized so that users are not affected.
  • To achieve high GoS, you should follow the recommendations that will be presented in subsequent chapters to use reliable components (cables, patch panels, switches, routers, and so on), and to build redundancy and failover into the network using such techniques as dynamic routing, the Spanning Tree Protocol (STP) for switched networks, Hot Standby Router Protocol (HSRP), and so on.

Documenting QoS Requirements

  • You should work with your customer to classify each network application in a service category.
  • If your customer has applications that can be characterized as needing controlled-load or guaranteed service, you can use those terms when filling in the “QoS Requirements” column.
  • If your customer plans to use ATM, you can use the ATM Forum’s terminology for service categories.
  • Even if your customer does not plan to use ATM or IETF QoS, you can still use the ATM Forum or Integrated Services working group terminology.
  • Another alternative is to simply use the terms inflexible and flexible.
  • Inflexible is a generic word to describe any application that has specific requirements for constant bandwidth, delay, delay variation, accuracy, and throughput.
  • Flexible is a generic term for applications that simply expect the network to make a best effort to meet requirements.
  • Many nonmultimedia applications have flexible QoS requirements.
  • For voice applications, you should make more than one entry in Table 4-4 due to the different requirements for the call control flow and the audio stream.

Network Traffic Checklist

You can use the following Network Traffic checklist to determine if you have completed all the steps for characterizing network traffic:

  • I have identified major traffic sources and stores and documented traffic flow between them.
  • I have categorized the traffic flow for each application as being terminal/host, client/server, peer-to-peer, server/server, or distributed computing.
  • I have estimated the bandwidth requirements for each application.
  • I have estimated bandwidth requirements for routing protocols.
  • I have characterized network traffic in terms of broadcast/multicast rates, efficiency, frame sizes, windowing and flow control, and error-recovery mechanisms.
  • I have categorized the QoS requirements of each application.
  • I have discussed the challenges associated with implementing end-to-end QoS and the need for devices across the network to do their part in implementing QoS strategies.