QoS Functions in Core and Backbone Networks

Circuit-switched core and the 3GPP QoS concept
Dr. Mohit Bansal, Canada, Teacher
Published Date: 26-10-2017
This chapter describes the QoS and traffic management mechanisms that may be implemented in the core and backbone parts of the network. Packet core elements play a key role in QoS management by, for instance, checking mobile station (MS) requested QoS attributes against the subscriber's profile and by performing admission control, authorisation (R5) or translation procedures. The UMTS bearer service model defined in 3GPP is a framework upon which related QoS mechanisms may be implemented. Because QoS is an end-to-end issue, suitable QoS mechanisms are also needed in the backbone. Note that in case most of the backbone traffic is circuit-switched (CS) voice, QoS differentiation can only bring minor gains and thus slight overprovisioning is needed to ensure satisfactory end-user experience. As the portion of packet-switched (PS) data increases in the backbone, capacity gains brought by QoS differentiation also increase. Sections 6.1–6.3 respectively address the CS core, PS core and backbone domains.

6.1 Circuit-switched QoS

This section introduces QoS mechanisms for CS traffic. As explained below, the resources for CS calls have to be ensured not only in radio and core CS network elements but also in core transport elements (or backbone).

6.1.1 Architecture of the circuit-switched core network

In 3GPP R4 the split circuit-switched core architecture was defined. In an R4-compliant mobile service switching centre (MSC) server system, handling of the user plane and the control plane are separated.
The traditional MSC is split into an MSC server that takes care of call control and signalling and a media gateway that provides user plane functionality. The key interfaces – Iu-cs, Nb and Mc – can be implemented using IP or asynchronous transfer mode (ATM) transport. The core network is common to both the UMTS terrestrial radio access network (UTRAN) and GSM/EDGE radio access network (GERAN).

Note that the 3GPP architecture totally ignores the distances, physical locations and grouping of network elements. For QoS these are however essential. Core network elements typically reside in a rather small number (typically between 3 and 10) of centralised core sites. Radio network controllers reside either in the core sites or in distributed controller sites. These controller sites often also house GSM base station controllers and media gateways.

In the radio access network (RAN) dedicated transport channels are provided for circuit-switched traffic. Between controller sites and core sites a common packet transport network for all traffic is typically used. In this network, circuit-switched traffic is mixed with packet-switched mobile traffic and conventional data traffic from other sources, and the QoS schemes of the used transport technology apply. The transport and QoS in R4 networks is outlined in Figure 6.1.

[Figure 6.1 Transport and QoS in an R4 network: controller sites (RNC, MGW) connect over narrow, costly links (SDH/ATM; a mix of radio links, copper and fibre) with dedicated transport channels and resource reservation for conversational traffic; core sites (MGW, SGSN, HLR, GGSN, MSC server) interconnect over low-cost, fibre-based IP/MPLS (or ATM) networks where conversational class traffic is prioritised in routers and switches; both mobile network controllers and mobile core elements treat traffic according to UMTS traffic class.]

6.1.2 Circuit-switched services

The most important circuit-switched service is speech.
Voice calls dominate in mobile networks both in terms of traffic volume and as revenue generators. In addition to speech, circuit-switched data services are available. Speech services are normal voice connections. Call cases include 3G–3G calls as well as calls to the GSM network, public-switched telephone network (PSTN) and IP multimedia. Data services can be divided into transparent and non-transparent, depending on the requirements of the application used. Transparent services are used for delay-sensitive applications requiring a synchronous bearer service. Non-transparent services are used for applications that support an asynchronous bearer service. The benefit of using the non-transparent service is the possibility of retransmissions over an error-prone air interface.

6.1.3 Factors affecting the quality of circuit-switched services

The quality of service perceived by the user (or QoE) is partly determined by general factors such as call setup time and call success rate and partly by the actual connection – that is, speech or video quality, data download time and latency. During the call, handovers may cause interruptions or modifications in the service. These also affect user experience.

For voice services the impact of packet loss, delay and delay variation is dependent on the codecs used. For adaptive multi-rate (AMR) codecs, frame erasure rates of 0.5–1.0% do not seem to cause significant quality degradations [1].

Probably the most important and, from the QoS perspective, the most challenging circuit-switched data service is video telephony. It requires a constant bit rate, small delay variation and a continuous bit stream. So, a synchronous transparent data bearer is used. In a two-way video conversation the delay caused by the mobile network is an issue.

6.1.4 Circuit-switched core and the 3GPP QoS concept

The 3GPP quality of service concept and architecture specification TS 23.107 [2] does not explicitly specify the QoS mechanisms for the circuit-switched core.
Focus in the 3GPP specification is on the traffic classes (TCs) used in the packet core. These TCs (conversational, streaming, interactive, background) are rather irrelevant here, as voice and circuit-switched data services are all conversational in nature. The only exception is the short message service (SMS). SMS messages are carried in the signalling network among 3GPP-defined control protocols. The latest major QoS-related changes for the circuit-switched core were first introduced in 3GPP R4. They included the MSC server–media gateway concept and IP transport of core network protocols.

TS 23.107 specifies that the UMTS packet core network shall support different backbone bearer services for a number of QoS needs. The operator can choose whether IP or ATM QoS capabilities are used. In case of an IP backbone, differentiated services shall be used.

The paragraphs above may seem confusing. 3GPP QoS concept and architecture specifications do not provide any QoS differentiation for the circuit-switched core. The network operator decides the QoS treatment of different logical interfaces. This is rather fundamental, as the network operators also decide how different user plane and control plane traffic types are prioritised. Should voice, messaging, signalling or operation and maintenance packets be dropped first? Different policies apply. Some guidance on the treatment of the different types of traffic in an IP backbone is given in Section 6.3.

6.1.5 QoS mechanisms in the circuit-switched core

In addition to the 3GPP QoS concept, the service quality experienced by end-users is affected by the way in which circuit-switched core call control works. When a call is set up the network reserves both radio and core network resources for the call. In a traditional time division multiplexing (TDM) based core this includes the circuits in the actual switching equipment and the time slots on the transmission links. In an MSC server system the reserved resources are ports in the media gateways.
The difference between the traditional TDM environment and the MSC server system is that in the TDM environment resource reservation is done for a circuit between the point where the call enters the network and the point where it leaves, whereas the MSC server system only reserves media gateway resources at ingress and egress. The availability of transport capacity between these points has to be ensured by some other means.

Reading the above one may think that the TDM solution is more complete and thus better than the MSC server solution. Unfortunately, the TDM network has to be constructed on a 2-Mb/s or 64-kb/s basis. Capacity allocations are rigid and, to avoid the overwhelming complexity of a full mesh, even smaller networks have to be built in a hierarchical architecture. So, network operators prefer MSC server-based solutions because of the simplified network architecture, lower capital expenditure and smaller operational cost.

The major QoS question in MSC server networks is how to make sure that the transport capacity between the media gateways is available. A second important question is how to guarantee the availability of signalling links.

At this stage some may despair and decide that a signalled ATM network with a rigid capacity allocation is needed between mobile core network elements. This is supported by 3GPP specifications. When correctly operated it most likely solves the problems at hand. Unfortunately, it would in many cases mean the construction of a new dedicated network parallel to the IP network that is needed anyway for both consumer and corporate data services. The need for major investments and the task of building lots of competences in technology that does not seem to have a future might be reason enough to take a second look at the QoS technologies available for IP-based networks.
First, it may be a good idea to think about the extent of the issues. Remember that QoS mechanisms do not create additional bandwidth. They just help in selecting which traffic to drop or to delay in an overload situation. Lost traffic equals lost revenue. A radio access network and mobile core are much more expensive to build and to operate than a standard IP or ATM backbone between major sites. Also, the network structure typically limits the amount of backbone traffic, as a core network site typically serves a distinct geographical region. Mobile subscribers tend to call people close to them, the kids at home or the boss in the office. This traffic normally does not need backbone transport.

Moderate overprovisioning of the backbone is recommended, as it also makes it easier to adapt to changing traffic patterns. There is no point in building a network in which precious mobile network elements cannot be utilised to the full extent because of insufficient backbone transport capacity.

The QoS concerns that remain in a conservatively designed packet-based circuit-switched core are related to exceptional situations. What happens if the call distribution suddenly changes a lot or if some of the network resources become unavailable? In these cases a traditional TDM-based circuit-switched core will reject the calls of most subscribers and only accept calls from prioritised subscribers. This is done using the allocation retention priority (ARP), an attribute defined in the home location register (HLR) subscription. These mechanisms also work in a packet-based circuit-switched core, with the only difference that call control has no direct means to determine to what extent transport resources are available. So, instead of a busy tone the subscriber may get a call with unacceptable voice quality or no voice at all.

QoS mechanisms in the backbone help in avoiding situations where the packet loss rate for circuit-switched connections or related signalling becomes unacceptable.
Conversational services have the highest priority in the differentiated services (DiffServ) QoS scheme used. Operation of the DiffServ scheme is described in more detail in Section 6.3.

6.2 Packet-switched core QoS

This section introduces QoS mechanisms for the PS core domain. More precisely, it describes PDP context QoS parameter control and provides examples of the traffic management features in the GGSN and SGSN.

6.2.1 Session management

The session management functionality allows operators to control flexibly how sessions are mapped onto different QoS profiles. The main elements involved in that process are the SGSN, HLR and GGSN (also sometimes called the 'intelligent edge'). The PDP context activation procedure was described in detail in Chapter 3.

6.2.1.1 Transmission-mode selection in 2G-SGSN

The 2G-SGSN selects the transmission mode for the PDP context as part of session management. In the 2G-SGSN the values of the service data unit (SDU) error ratio and residual bit error ratio affect the transmission mode used in different layers [2]. Table 6.1 shows the transmission mode specified in 3GPP R99 and later releases for different SDU error ratio and residual bit error ratio combinations.

Table 6.1 Selection of transmission mode according to R99 QoS attributes.

SDU error ratio         Residual bit error ratio   Resulting 2G-SGSN behaviour
<= 10^-5                N/A                        GTP buffer used, acknowledged LLC mode, LLC data-protected
10^-5 < x <= 5*10^-4    N/A                        GTP buffer not used, unacknowledged LLC mode, LLC data-protected
> 5*10^-4               <= 2*10^-4                 GTP buffer not used, unacknowledged LLC mode, LLC data-protected
> 5*10^-4               > 2*10^-4                  GTP buffer not used, unacknowledged LLC mode, LLC data-unprotected

If the MS is a 3GPP R97/98 one, the transmission mode is selected directly according to the reliability class value.

6.2.1.2 Mapping of 3GPP R97/98 QoS attributes onto 3GPP R99 attributes

Because network elements and mobiles may support various standard releases, 3GPP has specified how QoS profiles from different releases should map to each other (see Chapter 3 for further details). In 3GPP R97/98 real time applications were not considered and therefore some of the current QoS parameters are not supported.

Attribute mapping from R97/98 to R99 is needed in the following cases [2]:

- Handover of PDP context from R97/98 SGSN to R99 SGSN.
- When the GGSN is R97/98 and the SGSN is R99. In such a case, the activation PDP context response from the GGSN accompanies the R97/98 attributes and the SGSN maps them onto R99.
- When the MS is an R99 one, but requests R97/98 attributes.

Attribute mapping from R99 to R97/98 is needed in the following cases:

- PDP context is handed over from GPRS/UMTS R99 to GPRS R97/98.
- When the GGSN is R97/98 and the SGSN is R99. In such a case, the activation PDP context request from the SGSN shall be responsible for mapping the R99 attributes onto R97/98.
- An R99 HLR may need to map the stored subscriber QoS onto the R97/98 QoS attributes that are going to be sent to the R97/98 and R99 SGSN.

If the MS requests R99 QoS attributes, even if some network element (other than the SGSN) replies with R97/98 QoS attributes, the response to the MS should include R99 QoS attributes. Likewise, if the MS requests R97/98 QoS attributes, the response should include R97/98 QoS attributes.

6.2.1.3 Real time PDP context based admission control

Each network element may support a configurable, maximum, real time bandwidth dedicated for all real time PDP contexts. This bandwidth may be shared by two TCs having different priorities. It may also be useful to have the maximum bandwidth per TC as a configurable parameter. In certain cases, the maximum overall bandwidth and the TC maximum bandwidth could be equal – for example, Max. overall bandwidth = 1 Gb/s and Max. TC streaming bandwidth = 1 Gb/s.
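A minimal sketch of how such configurable bandwidth pools might be enforced at PDP context activation is given below. The class and method names, the pool sizes and the simple accept/reject outcome are illustrative assumptions, not a standardised or vendor algorithm; a rejected request would in practice trigger the QoS profile downgrade procedure described next.

```python
# Illustrative sketch of real time PDP context admission control based on
# configurable bandwidth pools (Section 6.2.1.3). Names and limits are
# assumptions for illustration only.

class RtAdmissionControl:
    def __init__(self, overall_max_bps, per_tc_max_bps):
        self.overall_max = overall_max_bps        # max bandwidth for all RT PDP contexts
        self.per_tc_max = dict(per_tc_max_bps)    # e.g. {"conversational": ..., "streaming": ...}
        self.overall_used = 0
        self.tc_used = {tc: 0 for tc in self.per_tc_max}

    def admit(self, tc, gbr_bps):
        """Accept the PDP context only if both the overall and the per-TC
        pool can accommodate its guaranteed bit rate (GBR)."""
        if (self.overall_used + gbr_bps > self.overall_max or
                self.tc_used[tc] + gbr_bps > self.per_tc_max[tc]):
            return False  # would trigger a downgrade-QoS-profile procedure
        self.overall_used += gbr_bps              # GBR is taken away from
        self.tc_used[tc] += gbr_bps               # both remaining pools
        return True

    def release(self, tc, gbr_bps):
        """Return the GBR to the pools when the PDP context is deactivated."""
        self.overall_used -= gbr_bps
        self.tc_used[tc] -= gbr_bps

# Example: 1 Gb/s overall, 1 Gb/s for streaming (the two limits may be equal)
ac = RtAdmissionControl(1_000_000_000, {"conversational": 500_000_000,
                                        "streaming": 1_000_000_000})
assert ac.admit("streaming", 900_000_000) is True
assert ac.admit("streaming", 200_000_000) is False  # per-TC pool exhausted
```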
Whenever a new real time PDP context request arrives at the network element, the element will check whether there is any bandwidth available from the combination of the remaining overall bandwidth and TC bandwidth. If not, a downgrade PDP context QoS profile procedure may take place, and it is up to the MS to accept or reject this new QoS profile. If there is enough bandwidth, the PDP context request will be accepted and the given GBR is taken away from the remaining overall and TC bandwidth.

In addition to using the above mechanism, it is recommended to perform admission control based also on network element utilisation. For instance, the central processing unit (CPU) load percentage, TC and ARP can be used as input for the admission control decision. As an example, a network element could be configured so that if the CPU load is above 60%, no new streaming PDP contexts with ARP 3 are accepted. If the CPU load is above 80%, no new streaming PDP context with ARP 2 or below and no new conversational PDP context with ARP 3 are accepted. If the CPU load is above 90%, no new streaming PDP context and no new conversational PDP context with ARP 2 or below are accepted. Finally, conversational PDP contexts with ARP 1 may always be accepted. In this way the highest priority users and applications are served in highly loaded situations as well.

6.2.2 Intelligent edge concept (change for QoS control in packet core)

The intelligent edge concept was introduced to further improve QoS and charging control based on the actual services being used. This concept is described in the present section.

6.2.2.1 Service-based QoS differentiation

Broadband Internet access and mobile Internet access exhibit the following main differences:

- The network infrastructure is cheaper in broadband networks than in mobile networks. Base stations (BTS), BSC and RNC are expensive network elements that the operator will typically try to utilise optimally.
- In mobile networks, many users share the most commonly congested link (air interface), whereas in broadband access the most congested link is usually the private link.
These reasons, among others, are the drivers for advanced mobile network resource optimisation. As the most congested link is shared and mobile subscribers use applications with various QoS requirements, traffic differentiation is applied.

From the QoS viewpoint, the current 3GPP systems and/or specifications have the following limitations or constraints:

- Some MSs support only one PDP context at a time.
- Some MSs support only a limited number of APNs.
- Most currently available MSs do not request any QoS.
- Potential misuse of RT PDP contexts because of the open source APIs in the MS.
- Common subscriber QoS profile per APN for all access types.

In addition to these, some network element vendors may not support all parameters or TC combinations specified in 3GPP, which may cause extra signalling on the network. Finally, another issue is how to guarantee QoS when users connected to a GPRS access network move to UMTS coverage (and vice versa).

The most relevant QoS parameters used by network elements for prioritisation, scheduling and queuing are shown in Table 6.2.

Table 6.2 Relevant QoS parameters for traffic differentiation.

Traffic class     ARP             THP             MBR    GBR
Conversational    Yes (1, 2, 3)   No              Yes    Yes
Streaming         Yes (1, 2, 3)   No              Yes    Yes
Interactive       Yes (1, 2, 3)   Yes (1, 2, 3)   Yes    No
Background        Yes (1, 2, 3)   No              Yes    No

There are three main approaches to traffic differentiation: subscriber-based differentiation, service-based differentiation and a combination of these two.

Subscriber-based differentiation may be done using the ARP that is stored in the HLR for different subscribers. For instance, for the same APN a different ARP is given to three types of subscribers depending on their importance or charging type: VIP, gold and low-priority users. This approach also has drawbacks, since radio network resources may not be used optimally. Also, as the number of concurrent demanding services increases, gold and low-priority users will experience worse service quality.
Service differentiation as opposed to subscriber differentiation may require more intelligence in packet core elements, especially at the edge ofthe network. As explained above, 3GPP standards do not take into account potential terminal or equipment limitations and, therefore, alternative (i.e., non-standard) solutions are needed to deal withthem.Also,becausethemajorityofcurrentlyavailableMSsdonotsupportsimul- taneous PDP contexts and support only a small number of APNs, an HLR-based solution in which a unique APN is assigned for each service type is not suitable. InUMTSnetworks,QoSdifferentiationinthecoreisbasedonthePDPcontextQoS parametersthataremappedontothetransportQoS(seeSection6.2.4formoredetails). Inthisrespect,theGGSN–beingtheentrypointofcellularnetworks–playsanessential role.TheproposedideaforservicedifferentiationisthattheGGSNwillidentifywhichof the subscriber’s services is in use and will select the adequate QoS accordingly. One possible way is by looking inside the IP flow using a Layer 4/7 lookup mechanism. After the IP flow is matched to the right service, the corresponding PDP context QoS profile is compared with the maximum QoS profile for that service. If the PDP context QoSprofileishigherthanwhattheservicerequires,adowngradePDPcontextprocedure is initiated. If the service requires a higher QoS profile than the PDP context currently has, an upgrade PDP context procedure is initiated (the upgraded QoS profile cannot exceed the negotiated QoS that results from the combination of the MS-requested QoS andthesubscriberQoSintheHLR).IfthereareseveralactiveIPflowsassociatedwith one PDP context, the QoS profile suitable for the most demanding flow should be selected. 
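The GGSN-side decision just described can be sketched as follows. The service names, the single-number QoS "rank" and the per-service maxima are invented for illustration; a real QoS profile carries many attributes, and the Layer 4/7 classification itself is assumed to have already happened.

```python
# Illustrative sketch of service-based QoS differentiation at the GGSN:
# after a Layer 4/7 lookup matches a flow to a service, compare the PDP
# context QoS with the per-service maximum and decide the action.
# Service names and rank values are hypothetical.

# Hypothetical per-service maximum QoS profiles (higher rank = better QoS)
SERVICE_MAX_QOS = {"streaming_video": 4, "web_browsing": 2, "p2p_download": 1}

def decide_pdp_action(current_qos, service, negotiated_max):
    """Return 'upgrade', 'downgrade' or 'keep' for one detected flow.
    negotiated_max is the cap from MS-requested QoS + HLR subscription."""
    target = min(SERVICE_MAX_QOS[service], negotiated_max)
    if current_qos > target:
        return "downgrade"    # context offers more QoS than the service needs
    if current_qos < target:
        return "upgrade"      # service needs more, within the negotiated cap
    return "keep"

def qos_for_pdp_context(flows, negotiated_max):
    """With several active flows on one PDP context, select the profile
    suitable for the most demanding flow."""
    return min(max(SERVICE_MAX_QOS[s] for s in flows), negotiated_max)

assert decide_pdp_action(4, "p2p_download", negotiated_max=4) == "downgrade"
assert decide_pdp_action(1, "streaming_video", negotiated_max=3) == "upgrade"
assert qos_for_pdp_context(["web_browsing", "streaming_video"], 4) == 4
```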
As MS vendors are introducing open APIs for application developers, the requested QoS profile from an MS cannot be fully trusted. For example, if the end-user uses a peer-to-peer (P2P) application for file download, there is no guarantee that this application will request an NRT QoS profile, as it probably should. To prevent the misuse of QoS profiles, Layer 4/7 lookup is a very useful asset.

(E)GPRS and WCDMA have very different characteristics in terms of capacity and maximum throughput per user. Thus, knowing the access type when allocating resources in the core network may also be useful. For this reason, 3GPP R6 [4] has introduced the radio access technology (RAT) field in the PDP context activation and PDP context update messages between the SGSN and GGSN. Furthermore, a cell ID was also added to these messages. These parameters allow the edge of the network to modify the PDP context depending on RAT and access network capabilities.

6.2.2.2 Roaming and QoS-related issues

GPRS roaming standards define two main alternatives of connecting to a GGSN when the MS is connected to a roaming SGSN:

- The MS may be connected to a visitor network using the home GGSN.
- The MS may be connected to a visitor network using the visitor GGSN.

In practice, most operator networks are configured so that roaming users connect to their home network using their own home GGSN. The reasons for that are, among others, charging issues, APN configuration, languages used by different countries, etc.

As radio resources are expensive and limited, the operators might want to give the best resources to home subscribers instead of giving them to roaming users. Again, one important issue is how to avoid possible misuse of the real time PDP context (see Section 6.2.2.1) for roaming users, when the HLR, GGSN and everything behind the GGSN is in the home network. The most suitable place for solving this is the SGSN. The SGSN gets the IMSI with the public land mobile network (PLMN) Id during attach and routing area update (RAU) from the mobile station.
The PLMN Id consists of the combination of the mobile network code (MNC) and the mobile country code (MCC). These two parameters (MNC and MCC) are unique per operator. Thus, the SGSN could maintain a table of PLMN Ids that the operator wants to restrict and at the same time configure the maximum QoS profile for these PLMN Id users. In this way the operator takes control over their network resources. That is, based on the PLMN Id, the SGSN can control the maximum QoS profile for roaming users.

6.2.3 Packet core and high-speed downlink packet access (HSDPA)

High-speed downlink packet access (HSDPA) provides a performance boost for WCDMA that is comparable with what EDGE does for GSM. It brings a two-fold air interface capacity increase and a five-fold downlink data speed increase. HSDPA also shortens the round trip time and reduces downlink transmission delay variance.

For packet core elements, HSDPA implies an extension of the QoS profile up to 16 Mb/s maximum bit rate (MBR) and guaranteed bit rate (GBR) for a real time PDP context. The 3G SGSN and GGSN will add three new octets in the GPRS tunnelling protocol (GTP)-C negotiated QoS profile (known as 'QoS2-negotiated') according to [5] change request (CR) 492 and [6] CR 796. Three new octets also need to be added to GTP for charging information according to [7] CR 031. Other 3G SGSN-related changes are: three new octets added to the mobile application part (MAP) for communication between the 3G SGSN and HLR according to [8] CR 688 and to the CAMEL application part (CAP) for communication with charging when customised applications for mobile network enhanced logic (CAMEL) are used according to [9] CR 374. Finally, another change required for the GGSN is in the RADIUS interface, where the QoS profile is sent to and received from a RADIUS server.

As a summary, it can be said that the changes required from packet core network elements to support HSDPA are pretty minor compared with those needed in radio network elements.
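The PLMN-Id-based restriction described in Section 6.2.2.2 amounts to a simple lookup, sketched below. The PLMN Ids, the two-field QoS profile encoding and the override behaviour are invented for illustration; a real SGSN would apply such a cap during PDP context QoS negotiation.

```python
# Illustrative sketch of SGSN-side roaming QoS capping (Section 6.2.2.2).
# PLMN Ids and QoS values are hypothetical; a real profile has many fields.

# PLMN Id = MCC + MNC, taken from the leading digits of the IMSI
def plmn_id(imsi, mnc_digits=2):
    return imsi[:3 + mnc_digits]          # the MCC is always 3 digits

# Operator-configured maximum QoS profile per restricted roaming PLMN;
# here a profile is just (traffic_class, max_bit_rate_kbps) for brevity
RESTRICTED_PLMNS = {
    "24405": ("interactive", 128),        # hypothetical roaming partner
    "26201": ("background", 64),
}

def cap_requested_qos(imsi, requested):
    """Clamp the MS-requested QoS to the per-PLMN maximum, if one is set."""
    cap = RESTRICTED_PLMNS.get(plmn_id(imsi))
    if cap is None:
        return requested                  # home or unrestricted subscriber
    tc, mbr = requested
    cap_tc, cap_mbr = cap
    # assumption: the configured cap overrides the traffic class and
    # bounds the maximum bit rate
    return (cap_tc, min(mbr, cap_mbr))

assert cap_requested_qos("244051234567890", ("streaming", 384)) == ("interactive", 128)
assert cap_requested_qos("310151234567890", ("streaming", 384)) == ("streaming", 384)
```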
6.2.4 Traffic management

Packet core traffic management functions include packet classification and marking, queuing, scheduling and congestion avoidance mechanisms. There are three different QoS levels in packet core network elements, as shown in Figure 6.2:

- The first is UMTS QoS, which is related to PDP context specific QoS management.
- The second is Gn and Iu/Gb transport QoS, which includes, for instance, DiffServ edge functionality towards the radio and mobile packet core backbone.
- The third is the user layer QoS, which is a DiffServ edge functionality towards external IP networks.

[Figure 6.2 QoS function blocks in packet core elements: mapping of 3GPP QoS attributes to IETF DiffServ in the SGSN and GGSN, showing UMTS QoS, Gb/Iu and Gn transport QoS, and user layer QoS across the Gb, Iu-PS, Gn/Gp and Gi interfaces.]

The SGSN and GGSN mark the DiffServ code point (DSCP) field of the transport IP header according to the PDP context type. Table 6.3 provides an example of the mapping between the PDP context type and the DSCP field in the transport IP header.

Table 6.3 3GPP to DiffServ QoS mapping example.

Classifier                          Action
Traffic class     THP      ARP      PHB     DSCP
Conversational    –        ARP1     EF      101110
Conversational    –        ARP2     EF      101110
Conversational    –        ARP3     EF      101110
Streaming         –        ARP1     AF41    100010
Streaming         –        ARP2     AF42    100100
Streaming         –        ARP3     AF43    100110
Interactive       THP1     ARP1     AF31    011010
Interactive       THP1     ARP2     AF32    011100
Interactive       THP1     ARP3     AF33    011110
Interactive       THP2     ARP1     AF21    010010
Interactive       THP2     ARP2     AF22    010100
Interactive       THP2     ARP3     AF23    010110
Interactive       THP3     ARP1     AF11    001010
Interactive       THP3     ARP2     AF12    001100
Interactive       THP3     ARP3     AF13    001110
Background        –        ARP1     BE      000000
Background        –        ARP2     BE      000000
Background        –        ARP3     BE      000000

6.2.4.1 Traffic management in 3G SGSN

In case network element load exceeds the service rate, a single queue at each internal congestion point is no longer sufficient. Instead, a different queue for each type of service (PDP context type) is needed, to which independent latency, jitter and packet loss characteristics apply. Figure 6.3 shows the QoS traffic management functions in the 3G SGSN.

[Figure 6.3 QoS traffic management functions in 3G SGSN: the GTP layer performs GTP tunnelling (uplink and downlink), classification, UMTS QoS to DSCP mapping, policing and shaping; the IP layer performs input/output scheduling, classification, queuing and scheduling towards the network interfaces on the Gn and Iu sides.]

When IP packets arrive at the 3G SGSN from the GGSN, IP input scheduling is performed. IP input scheduling prioritises and schedules packets from ingress packet queues based on the DSCP. In Figure 6.4 an example of IP scheduling is given. In this example six queues are shown: one expedited forwarding (EF), four assured forwarding (AF) and one best-effort (BE). For more details on these IETF schemes see Section 6.3.

[Figure 6.4 IP scheduling example: incoming packets from the hardware driver pass an access control list (ACL) classifier and are placed into the EF queue, the AF4x–AF1x queues (managed with WRED) and the BE queue (managed with RED); a weighted round robin (WRR) scheduler with weights W1–W6 serves the queues towards the packet socket buffer.]

When the packet arrives at the hardware driver, the access control list classifies the packet based on the DSCP field in the IP header. Then, the packet is sent to the proper queue. Each queue has a queue management function which is responsible for establishing and maintaining queue behaviour within the 3G SGSN and involves four basic actions:

- Add a packet to the queue.
- Drop the packet if the queue is full.
- Remove the packet when requested by the scheduler.
- Monitor queue occupancy.

Depending on the queue the packets belong to, queue management uses different congestion avoidance mechanisms. Figure 6.4 shows two of the most popular active queue management schemes: random early detection (RED) and weighted random early detection (WRED).

RED uses the average queue occupancy as an input to decide whether congestion avoidance mechanisms ought to be triggered (the common action being packet drop). As the average queue occupancy increases, the probability of dropping a packet also increases:

- For occupancy up to a lower threshold min_th all incoming packets are accepted (drop probability is 0).
- Above min_th the probability of packet drop rises linearly towards a probability of max_p, reached for a max_th occupancy.
- At max_th and above, all incoming packets are dropped.

Average occupancy is calculated every time a new packet is received; it is based on a low-pass filter, or the exponential weighted moving average (EWMA), of instantaneous queue occupancy.
The formula is:

    Q_avg = (1 - W_q) * Q_avg + Q_inst * W_q                              (6.1)

where Q_avg is average occupancy, Q_inst is instantaneous occupancy and W_q is the weight of the moving average function. These values are typically set so that RED ignores short-term transients without inducing packet loss, but reacts before overall latency is affected or multiflow synchronisation of TCP congestion avoidance is experienced.

Dropping incoming packets randomly and at an early stage increases the likelihood of smoothing out transient congestion before queue occupancy gets too high. Randomising drop distribution at early stages also reduces the chances of simultaneously subjecting multiple flows to packet drops.

Queue managers are not limited to providing a single type of behaviour on any given queue. Additional information from the packet context may be used to select one of multiple packet discard functions. A precedence field can be used for the multiple packet discard function, as is the case in WRED. The idea is to give different min_th, max_th and max_p parameters to each RED instance. There are other congestion control mechanisms, such as RED with in/out, adaptive RED (ARED) and Flow RED (FRED), described in [10].

The next step after prioritisation is scheduling, which dictates the temporal characteristics of packet departures from each queue. Since the type of service determines which queue the packet is placed in, the scheduler is then the main enforcement point of relative priority, latency bounds or bandwidth allocation between different traffic types. Scheduling mechanisms may be classified in two groups: simple scheduling and adaptive scheduling.

The simple scheduling group includes strict priority and round robin (RR) scheduling. A strict priority scheduler orders queues by descending priority and serves a queue of a given priority level only if all higher priority queues are empty. RR scheduling, on the other hand, avoids local queue starvation by cycling through the queues one after the other, transmitting one packet before moving on to the next queue.
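The RED behaviour described above – EWMA averaging per Equation (6.1) and a linear drop ramp between min_th and max_th – can be sketched as follows. The parameter values are examples only, not recommendations.

```python
import random

# Illustrative RED sketch: EWMA average occupancy (Equation 6.1) and a
# linear drop-probability ramp between min_th and max_th.

class RedQueue:
    def __init__(self, min_th=5, max_th=15, max_p=0.1, w_q=0.2):
        self.min_th, self.max_th = min_th, max_th
        self.max_p, self.w_q = max_p, w_q
        self.q_avg = 0.0

    def drop_probability(self, q_inst):
        # Equation (6.1): Q_avg = (1 - W_q) * Q_avg + Q_inst * W_q
        self.q_avg = (1 - self.w_q) * self.q_avg + q_inst * self.w_q
        if self.q_avg < self.min_th:
            return 0.0                       # accept everything
        if self.q_avg >= self.max_th:
            return 1.0                       # drop everything
        # linear rise from 0 at min_th towards max_p at max_th
        return self.max_p * (self.q_avg - self.min_th) / (self.max_th - self.min_th)

    def accept(self, q_inst):
        """Randomised early-drop decision for one arriving packet."""
        return random.random() >= self.drop_probability(q_inst)

red = RedQueue()
assert red.drop_probability(2) == 0.0        # q_avg well below min_th
red.q_avg = 10.0
p = red.drop_probability(10)                 # q_avg stays at 10, mid-ramp
assert 0.0 < p < red.max_p
red.q_avg = 100.0
assert red.drop_probability(100) == 1.0      # far above max_th
```

WRED would simply hold one (min_th, max_th, max_p) triple per drop precedence and pick the triple from the packet's precedence field before computing the same ramp.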
From a service provisioning perspective, being able to maintain pre-defined bandwidth allocations to various traffic types that share a common link (CPU link or outbound link) is very often needed. Neither strict priority nor RR schedulers take into account the number of bits transmitted each time a queue is served. A number of scheduling algorithms have been developed to meet this need – such as deficit round robin (DRR), weighted round robin (WRR), fair queuing (FQ) and weighted fair queuing (WFQ).

In general, these two types of scheduling alone are not enough for the 3G SGSN, as some of the flows may have very tight QoS requirements that can only be met with SP scheduling. On the other hand, AF queues typically do not have such tight QoS requirements and therefore WRR is a more suitable scheduler for them. Figure 6.5 shows an example of cascade scheduling. In this case a combination of WRR and SP scheduling is used to accommodate EF and AF classes.

[Figure 6.5 Cascade scheduler: an inner WRR scheduler serves the AF4–AF1 queues with weights W1–W4, and an outer strict priority (SP) scheduler combines the EF queue, the WRR output and the BE queue.]

The GTP process classifies the packet according to which PDP context it belongs to. For all traffic the DSCP field in the IP header may be changed according to PDP context attributes and the router configuration at the Gn or Iu interfaces. Also, for roaming subscriber PDP contexts, metering and policing are done. Metering and policing functionalities are described in Section 6.2.4.2. After the packet has been processed by the GTP layer, it is sent to the IP stack, which forwards it to the right interface using a similar type of scheduling to that presented earlier. Note that outbound scheduling can be done per physical or logical interface. If the outbound interface is an ATM one, then different QoS mechanisms can be used, as explained in Section 6.3.4.
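The cascade arrangement of Figure 6.5 can be sketched roughly as below: strict priority between EF, the aggregated AF output and BE, with WRR applied only among the AF queues. Queue names and weights are illustrative, and a real scheduler would count bytes rather than packets.

```python
from collections import deque

# Illustrative cascade scheduler sketch (cf. Figure 6.5): an outer strict
# priority stage over EF, the WRR-aggregated AF queues and BE.

class CascadeScheduler:
    def __init__(self, af_weights):
        self.ef = deque()
        self.be = deque()
        self.af = {name: deque() for name in af_weights}      # AF4..AF1
        # expand weights into a serving cycle, e.g. {"AF4": 2} -> AF4, AF4
        self.wrr_cycle = [n for n, w in af_weights.items() for _ in range(w)]
        self.wrr_pos = 0

    def enqueue(self, queue, packet):
        {"EF": self.ef, "BE": self.be, **self.af}[queue].append(packet)

    def _wrr_dequeue(self):
        # Weighted round robin over the AF queues only
        for _ in range(len(self.wrr_cycle)):
            name = self.wrr_cycle[self.wrr_pos]
            self.wrr_pos = (self.wrr_pos + 1) % len(self.wrr_cycle)
            if self.af[name]:
                return self.af[name].popleft()
        return None

    def dequeue(self):
        if self.ef:                      # strict priority: EF always first
            return self.ef.popleft()
        pkt = self._wrr_dequeue()        # then the WRR-served AF classes
        if pkt is not None:
            return pkt
        return self.be.popleft() if self.be else None   # BE last

s = CascadeScheduler({"AF4": 4, "AF3": 3, "AF2": 2, "AF1": 1})
s.enqueue("BE", "b1"); s.enqueue("AF3", "a1"); s.enqueue("EF", "e1")
assert [s.dequeue(), s.dequeue(), s.dequeue()] == ["e1", "a1", "b1"]
```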
6.2.4.2 Traffic management in GGSN

The scheduling, queuing and prioritisation of IP traffic in the GGSN are typically done in a similar way to that done in the 3G SGSN (see Section 6.2.4.1). Since the GGSN is the edge element for GPRS and UMTS PS services, metering and policing for downlink traffic are key functionalities.

The GTP level classification identifies the PDP context the packet belongs to. PDP context specific QoS attributes are then used for QoS-related traffic management functions on the GTP layer. The metering function ensures that downlink traffic conforms to the negotiated bit rate at the PDP context level. The traffic conditioner function, which is part of the metering function, is the actual component providing the conformance of downlink user data traffic.

An algorithm for bit rate conformance definition was presented in [10]. The algorithm is known as a 'token bucket'. In this context a 'token' represents an allowed data volume (e.g., a byte). 'Tokens', given at a constant 'token rate' by a traffic contract, are stored temporarily in a 'token bucket' and are consumed by accepting the packet. This algorithm uses the following parameters:

- Token rate r (the maximum bit rate/guaranteed bit rate).
- Bucket size b (a combination of the maximum bit rate/guaranteed bit rate and the maximum SDU size).
- Token bucket counter (TBC): the number of given/remaining tokens at any time. The TBC is usually increased by r in each time unit.
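A minimal sketch of such a token bucket meter, assuming a byte-granular TBC and continuous token accrual (parameter names follow the list above; the values used are illustrative, not 3GPP-mandated):

```python
class TokenBucket:
    """Token bucket meter: tokens (bytes) accrue at rate r up to bucket
    size b; a packet of length l is conformant if the token bucket
    counter (TBC) holds at least l tokens."""

    def __init__(self, rate_bytes_per_s, bucket_bytes):
        self.r = rate_bytes_per_s
        self.b = bucket_bytes
        self.tbc = bucket_bytes    # start with a full bucket
        self.last = 0.0            # time of the previous conformance check

    def conforms(self, now, pkt_len):
        # Accrue tokens since the last check, capped at the bucket size b.
        self.tbc = min(self.b, self.tbc + self.r * (now - self.last))
        self.last = now
        if self.tbc >= pkt_len:
            self.tbc -= pkt_len    # consume tokens: packet is conformant
            return True
        return False               # non-conformant: TBC left unchanged
```

A non-conformant packet would then be handled according to its PDP context type (dropped, buffered or shaped), as described below.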
However, the TBC has an upper bound b (the token bucket size) and the value of the TBC shall never exceed b. When packet p1 with length l1 arrives, the receiver checks the current TBC. If the TBC value is equal to or larger than l1, the packet arrival is judged compliant – that is, the traffic is conformant. At this moment tokens corresponding to the packet length are consumed and the TBC value decreases by l1. The same happens to packet p2 with length l2 in our example. However, for packet p3, the TBC is below l3 and the packet arrival is considered non-compliant (the traffic is not conformant). In this case, the value of the TBC is not updated and p3 is either dropped or forwarded to the shaper.

If the packet is not compliant and belongs to an NRT PDP context, it is buffered until it becomes compliant or until its time to live (TTL) has expired. Non-compliant packets belonging to an RT PDP context are dropped.

The last function of the GTP layer is marking the IP header DSCP field according to the PDP context QoS profile. Also, for uplink traffic the DSCP field can be marked in order to enable consistent traffic differentiation behind the Gi interface.

6.2.4.3 Traffic management in 2G-SGSN

This section describes traffic management in the 2G-SGSN and, more specifically, active queue management techniques that may be implemented in that element (see [11] for more details).

In addition to playing a central role in, say, session, mobility and charging management procedures, the 2G-SGSN acts as a buffer for the radio access network. That is, the 2G-SGSN shall temporarily hold downlink packets (instead of forwarding them immediately) if the BSC is not able to receive them due to, say, lack of its own buffer space. The main benefit of this approach is to avoid placing too high memory requirements on the BSC. This flow control procedure between the 2G-SGSN and the BSC is specified in 3GPP [3] and [12] (see Figure 6.6).
There are three different flow control levels. The first is BSSGP virtual connection (BVC) flow control, which refers to the cell level. In case the available buffer space in the BSC reserved for a particular BVC drops below a certain threshold, the BSC will signal the 2G-SGSN to reduce its sending rate for the traffic accessing that BVC. The second level is MS-specific flow control. Again, if the available memory in the BSC reserved for a particular MS gets too low, the 2G-SGSN will reduce the sending rate for that particular MS. The last (optional) level is the PFC, which handles flows within a certain MS that have specific QoS requirements.

Figure 6.6 Flow control levels in the 2G-SGSN (PFC, MS and BVC), applied to every LLC-PDU [12].

As specified in [12], the 2G-SGSN will apply these flow control tests to every logical link control packet data unit (LLC-PDU): flow control is performed on each LLC-PDU first by the PFC flow control mechanism (if applicable and negotiated), then by the MS flow control mechanism and last by the BVC flow control mechanism. This flow control approach has the following benefits:

- It prevents downlink traffic overflow (i.e., packet drops) in the BSC.
- It ensures that a certain congested cell, MS or PFC will not create unnecessary downlink buffer delay in the 2G-SGSN for other flows accessing non-congested cells, MSs or PFCs.

So, in high-load conditions, the 2G-SGSN will be the main element in charge of handling potential excess downlink traffic in the BSS. In other words, the 2G-SGSN downlink buffer is a potential traffic bottleneck, since overload may be caused not only by 2G-SGSN or Gb capacity limitations but also by cell or even MS congestion, which are indeed more common scenarios. This is illustrated in Figure 6.7.
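The per-LLC-PDU test order described above (PFC first if negotiated, then MS, then BVC) can be sketched as follows. The credit counters are a deliberate simplification of the bucket parameters signalled by the BSC in the real BSSGP procedure, and the function name is illustrative:

```python
def may_send(llc_pdu_len, pfc_bucket, ms_bucket, bvc_bucket):
    """Apply the three flow control tests, in order, to one LLC-PDU.
    Each 'bucket' is a remaining-credit counter in bytes, or None when
    that level does not apply (e.g., PFC not negotiated). The PDU may
    be sent only if every applicable level has enough credit."""
    levels = (pfc_bucket, ms_bucket, bvc_bucket)  # PFC, then MS, then BVC
    for bucket in levels:
        if bucket is not None and bucket["credit"] < llc_pdu_len:
            return False               # blocked: PDU stays buffered in 2G-SGSN
    for bucket in levels:              # all tests passed: charge every level
        if bucket is not None:
            bucket["credit"] -= llc_pdu_len
    return True
```

Note that no level is charged unless all of them accept the PDU, so a congested MS or cell does not consume credit belonging to other flows – which is precisely the isolation benefit listed above.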
One way to deal with potential buffer delay is to prioritise the traffic based on how delay-sensitive it is, as explained in detail for the 3G-SGSN in Section 6.2.4.1. Likewise, in the 2G-SGSN, the traffic (LLC packets) from different TCs may be handled in separate buffers (as shown in Figure 6.8). A weighted fair queuing scheduler may then allocate a certain share of the output capacity to each buffer.

Figure 6.7 The 2G-SGSN is a potential traffic bottleneck in loaded conditions (flow control between the BSC and the 2G-SGSN on the downlink path from application servers through the GGSN to the BTS).

Although QoS-based queuing and scheduling may lower or even eliminate buffer delays for the highest priority classes (i.e., real time traffic), the lowest priority classes (i.e., non-real time traffic) are then even more likely to experience long delays (depending on the traffic mix). Measurements performed in live (E)GPRS networks typically confirm that end-to-end latency grows with network load. Thus, some other mechanisms to control or reduce buffer delays for non-real time TCs are needed in the 2G-SGSN to optimise both end-user experience and spectral efficiency. It should be noted that, although the same applies to any other core network element (e.g., GGSN, 3G-SGSN or backbone routers), the buffer delay issue is typically most acute in, and in a way specific to, the 2G-SGSN because of the standard flow control between the radio and core network domains. Therefore, specific non-classical approaches to solve this buffer delay problem are also worth investigating.

Figure 6.8 2G-SGSN traffic prioritisation (signalling, RT and NRT traffic queues served by a weighted fair queuing packet scheduler, combined with PFC/MS/BVC flow control and CIR control towards the NS-VC hardware driver).

The first way to control buffer delays in the 2G-SGSN is to introduce a pre-defined lifetime for LLC frames.
The idea is very simple: after having spent a certain pre-defined time in the 2G-SGSN and/or BSC buffers, the LLC frame will be discarded. Such a mechanism is available by default in most router-like network elements in order to ensure that packets that are too old are removed. In the 2G-SGSN, this scheme can also help to guarantee a certain maximum buffer delay depending on the TC considered. The objective is to find the right trade-off between high resource utilisation and optimised end-user throughput. For instance, network utilisation may be affected if the packet lifetime is set too low. On the other hand, a too large lifetime may degrade end-user throughput because of high latency. In order to avoid unnecessary packet drops at the BSC, LLC frames successfully sent from the 2G-SGSN to the BSC should be given at least a pre-defined minimum lifetime – that is, the total LLC frame lifetime should be split between the 2G-SGSN and the BSC. Moreover, since flow control cannot provide any delay bounds (and there is no active queue management at the BSC), we should also introduce a pre-defined maximum lifetime for LLC frames at the BSC.

Another way to limit buffer delays is simply to limit the buffer size. It very much resembles the previous approach, although in this case the output interface speed must be known in order to predict the maximum buffer delay. What complicates things in this respect in the 2G-SGSN is the multilayer flow control presented earlier. For instance, even though the output link speed of the 2G-SGSN would allow immediate forwarding of received packets, some packets may have to be buffered because the BSC is unable to accept them. Thus, extracting a maximum buffer delay out of the 2G-SGSN buffer size is not easy.

A third, more advanced approach is to randomly drop packets before the buffer gets full or before the packet lifetime expires. The RED algorithm [16] (already described in detail in Section 6.2.4.1), which does exactly this, is probably the most popular active queue management scheme used nowadays.
As mentioned, the RED algorithm drops arriving packets probabilistically. The probability of packet drop increases as the estimated average queue size grows. RED responds to a time-averaged queue length, not an instantaneous one. Thus, if the queue has been mostly empty in the 'recent past', RED is not likely to drop packets (unless the queue overflows). On the other hand, if the queue has recently been relatively full, indicating persistent congestion, newly arriving packets are more likely to be dropped.

An improvement of RED called 'explicit congestion notification' (ECN) was later introduced. As stated in [13], explicit congestion notification allows a TCP receiver to inform the sender of congestion in the network by setting the ECN-Echo flag upon receiving an IP packet marked with the congestion experienced (CE) bit(s). The TCP sender will then reduce its congestion window. Thus, the use of ECN is believed to provide performance benefits [14]. Reference [15] also places requirements on intermediate routers – for example, active queue management and setting of the CE bit(s) in the IP header to indicate congestion. Therefore, the potential improvement in performance can only be achieved when ECN-capable routers are deployed along the path. We also note that numerous variants of RED and ECN have been proposed [16].

RED and ECN implementation could, in principle, take into account the 2G-SGSN multilayer flow control mechanism.
That is, one instance of RED or ECN could be applied to each independent flow control entity. As an illustration, if the RED threshold in a 2G-SGSN buffer is exceeded mostly due to a few congested cells (BVCs), it does not necessarily mean that packets accessing other non-congested cells – buffered in the 2G-SGSN for such other reasons as Gb capacity limitation – should be randomly dropped by the same rules. However, a multilayer RED (or ECN) approach in the 2G-SGSN would add significant complexity and require extra CPU and memory, while the practical performance gains are not so clear. This is because, although packets may not be buffered in the 2G-SGSN for the same reasons (e.g., MS vs. Gb vs. BVC congestion), they all indicate some sort of congestion as well as a potentially significant buffer delay (e.g., BSC buffers are full and cannot accept any more data). The state of the art [16] recommends buffer delay to be only a fraction of the round trip time, and thus it is probably a good idea to allow only a relatively small buffer delay in the 2G-SGSN. In conclusion, it seems the potential performance gains of applying one separate instance of RED (or ECN) to each independent flow control entity do not justify the required extra complexity.

Another alternative for the 2G-SGSN is to follow a TTL-based RED approach since, as explained above, it is not straightforward to relate 2G-SGSN buffer occupancy and buffer delay. There are two possible implementations for a TTL-based RED approach:

- In the first, the packet is checked periodically and if its age (current time less timestamp) exceeds a threshold, the packet is randomly dropped (or marked if ECN is used and the flow supports ECN).
- The second implementation is a bit simpler. Here the packets are given a random lifetime (in addition to the deterministic lifetime that is used in both the 2G-SGSN and BSC) when they enter the 2G-SGSN.
As in the first implementation, the packet is periodically checked to see whether it should be discarded – that is, whether the age of the packet (current time less timestamp) exceeds its lifetime. If that is the case, the packet is dropped (or marked if ECN is used and the flow supports ECN).

The last congestion control alternative that we consider here is called 'window pacing'. With this scheme, the router can decrease the TCP-advertised window value in uplink TCP acknowledgements if the defined buffer filling level threshold for a specific TC is reached. A decreased advertised window value forces the sending TCP to slow down the transmission speed, since the sending window is the minimum of the congestion window and the advertised window.

Simulations were performed in order to evaluate the efficiency of some of the aforementioned schemes in decreasing buffer delay and improving global throughput. The simulator is described in Figure 6.9.

Figure 6.9 The end-to-end (E)GPRS simulator [11] (servers, GGSN, 2G-SGSN, BSC and MS models, including GTP header insertion/removal, LLC segmentation, reassembly and retransmissions, flow control, and an air interface with flow-based round robin and TDMA). Reproduced by permission of IEEE, © 2006.

The GPRS agent has four different instances: GGSN, SGSN, BSC and cell. Moreover, a 'pass agent' is needed for each TCP/UDP/sink agent. The main features of the GPRS model are:

- MS/cell/committed information rate (CIR) flow control in 2G-SGSN downlink queuing.
- Dynamic BSSGP flow control (BSC to 2G-SGSN) for MS and BVC flows.
- LLC retransmissions (if needed).
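Returning briefly to the TTL-based RED alternative described above: its second (random-lifetime) variant can be sketched as follows. The lifetime and jitter values are illustrative, not taken from the simulations:

```python
import random

def assign_lifetime(det_lifetime_s, jitter_s, rng=random.uniform):
    """On entering the 2G-SGSN, each packet receives a random lifetime
    on top of the deterministic one, so that expiry-based discards are
    spread over time rather than synchronised across flows."""
    return det_lifetime_s + rng(0.0, jitter_s)

def check_expired(enqueue_ts, lifetime_s, now):
    """Periodic check: the packet should be discarded (or CE-marked, if
    the flow supports ECN) once its age exceeds its lifetime."""
    return (now - enqueue_ts) > lifetime_s
```

Randomising the lifetime at enqueue time plays the same desynchronising role as RED's random drop decision, but keyed to buffer *delay* rather than buffer *occupancy*, which is exactly why it suits the 2G-SGSN case discussed above.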
The simulator makes use of publicly available ns-2 [17] modules such as TCP (NewReno) and traffic sources. Figures 6.10 and 6.11 illustrate the end-to-end delay and TCP throughput (i.e., goodput) experienced by end-users with various congestion control schemes. In this scenario, we have:

- Five active streaming users (mean bit rate of 40 kb/s, maximum bit rate of 80 kb/s, UDP used as the transport protocol) using the streaming TC.
