Tech Report NWU-EECS-09-05


    Electrical Engineering and Computer Science Department

    Technical Report

    NWU-EECS-09-05

    April 16, 2009

EmNet: Satisfying The Individual User Through Empathic Home Networks

    J. Scott Miller John R. Lange Peter A. Dinda

    Abstract

Current network control systems, including quality of service systems in general, base optimization decisions on assumptions about the needs of a canonical user. We propose focusing optimization on the needs of individual users. This approach has the potential to increase overall user satisfaction while using fewer resources. We apply the approach to home networks. A careful user study clearly demonstrates that measured end-user satisfaction with a given set of home network conditions is highly variable: user perception and opinion of acceptable network performance is very different from user to user. To exploit this fact we design, implement, and evaluate a prototype system, EmNet, that incorporates direct user feedback from a simple user interface layered over existing web content. This feedback is used to dynamically configure a weighted fair queuing (WFQ) scheduler inside a broadband router that schedules the wide-area link. We evaluate EmNet in terms of the measured satisfaction of end-users, and in terms of the bandwidth required. We compare EmNet with an uncontrolled link (the common case today), as well as with statically configured WFQ scheduling. On average EmNet is able to increase overall user satisfaction by 24% over the uncontrolled network and by 19% over static WFQ. EmNet does so while increasing the average application bandwidth by only 6% over the static WFQ scheduler.

Effort funded by the National Science Foundation under awards CNS-0720691, CNS-0721978, and CNS-0709168. Lange was partially funded by a Symantec Research Labs Fellowship.


Keywords: Network Management, Human-Computer Interaction, Networks, Empathic Systems, Human Factors


EmNet: Satisfying The Individual User Through Empathic Home Networks

    J. Scott Miller John R. Lange Peter A. Dinda

{jeffrey-miller,jarusl,pdinda}@northwestern.edu

Northwestern University

    ABSTRACT

Current network control systems, including quality of service systems in general, base optimization decisions on assumptions about the needs of a canonical user. We propose focusing optimization on the needs of individual users. This approach has the potential to increase overall user satisfaction while using fewer resources. We apply the approach to home networks. A careful user study clearly demonstrates that measured end-user satisfaction with a given set of home network conditions is highly variable: user perception and opinion of acceptable network performance is very different from user to user. To exploit this fact we design, implement, and evaluate a prototype system, EmNet, that incorporates direct user feedback from a simple user interface layered over existing web content. This feedback is used to dynamically configure a weighted fair queuing (WFQ) scheduler inside a broadband router that schedules the wide-area link. We evaluate EmNet in terms of the measured satisfaction of end-users, and in terms of the bandwidth required. We compare EmNet with an uncontrolled link (the common case today), as well as with statically configured WFQ scheduling. On average EmNet is able to increase overall user satisfaction by 24% over the uncontrolled network and by 19% over static WFQ. EmNet does so while increasing the average application bandwidth by only 6% over the static WFQ scheduler.

1. INTRODUCTION

Increasingly, computer systems and networks are used to run interactive applications, where the effects of control decisions are perceived directly by the user. Of course, a large body of work in quality of service exists that investigates how to control systems and networks such that they provide acceptable application-level performance to the user. However, it has been demonstrated in contexts outside of networks, such as CPU scheduling and power management, that actual measured user satisfaction with any given operating point exhibits considerable variability across users [9, 19, 6]. Does user satisfaction with network control decisions also exhibit this property?

Measuring individual user satisfaction online, and then using such measurements directly in the control process, makes it possible to exploit this variation to the mutual benefit of users and systems. This result has been demonstrated, again through user studies, in systems-level control as diverse as power management, scheduling, and remote display systems [21, 18, 16, 17, 14]. Further, it is becoming increasingly clear that such measurement can be done with minimal intrusion on the user, for example through the use of biometric information [22]. Does it make sense to leverage direct user feedback in controlling networks?

We claim that the control of networks can benefit from the idea of exploiting measurements of individual user satisfaction. In particular, we propose to associate a user satisfaction level with each network flow, or identifiable group of flows, relevant to an interactive application. Conceptually, picture each packet as having a user satisfaction level stamped onto it, much like an ECN bit, and network elements reading this information as the packet passes.

To evaluate this claim, we consider the context of the home network and its connection to the larger Internet. Previous studies have shown that this is a challenging and important environment in and of itself [1, 11]. In particular, hosts and devices within the home network talk to the Internet via a broadband router, where the upstream link typically has much lower bandwidth than either the home network or the rest of the path through the Internet [7, 13]. Furthermore, the size of the home network in terms of devices and users is growing, especially with the explosion of wireless. Home users are also increasingly running applications that use many long-lived and high-bandwidth flows alongside their interactive applications, exacerbating the bottleneck upstream link. How to schedule this link to provide a high degree of user satisfaction for interactive applications, while providing excellent service for the other applications, is the problem we consider. We elaborate on the issues of shared home networks, and related work in this area, in Section 2.

Does scheduling this link really need to take into account the individual interactive user or users in the household? We conduct a carefully constructed user study in which participants use a range of interactive web-based and other interactive applications while we vary the nature and degree of cross-traffic representing the traffic of other users in the household. Our participants represent a broad spectrum of individuals at a private university. Section 3 describes our study in detail.

Although our study includes numerous qualitative and quantitative contributions, one result is clearly the most important. That result is that there is a large degree of variation in expressed user satisfaction for identical application/cross-traffic scenarios. This holds across the wide range of application/cross-traffic scenarios we studied. It is an inescapable conclusion that users react in distinct and highly individual ways to cross traffic. It is not the case that a per-application utility model, or other aggregate, captures this variation.

Optimizing a home network by using an objective function that does not express this variation among interactive users misses two opportunities. First, it is likely to result in some interactive users being more dissatisfied than they need be, while others are over-provisioned. Second, it is likely to result in lower performance for non-interactive flows than is necessary.

Having found opportunity in this variation across individual users, we next design and implement a system, EmNet, that provides a mechanism for individual user satisfaction feedback for web-based and other interactive applications. The system, which is described in detail in Section 4, schedules the outgoing link using weighted fair queuing [2, 23]. Non-interactive traffic is given a specific weight, and the weight of each user's traffic has upper and lower bounds. The user can at any time adjust a slider control that is overlaid on top of their current web page or desktop environment. By raising the slider, the user increases the weight of her traffic, but at a cost. The slider control displays the current monetary cost of the user's bandwidth usage and weight/provisioning.
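To make the feedback loop concrete, the slider-to-weight mapping described above might look like the following sketch. The function names, the linear mapping, and the toy cost model are our assumptions for illustration; the report does not give EmNet's exact formulas.

```python
# Hypothetical sketch of the slider feedback described above; the linear
# mapping and the pricing model are assumptions, not EmNet's actual code.

def throttle_to_weight(slider: float, w_min: float, w_max: float) -> float:
    """Map a slider position in [0, 1] to a WFQ weight in [w_min, w_max]."""
    slider = max(0.0, min(1.0, slider))          # clamp out-of-range input
    return w_min + slider * (w_max - w_min)

def estimated_cost(weight: float, mb_used: float, price_per_mb: float) -> float:
    """Toy cost model: usage priced in proportion to the chosen weight."""
    return mb_used * price_per_mb * weight

weight = throttle_to_weight(0.5, 1.0, 5.0)       # midpoint slider -> weight 3.0
cost = estimated_cost(weight, 100.0, 0.01)       # cost shown in the overlay
```

The upper and lower weight bounds keep any single user from starving the non-interactive traffic class, matching the bounded-weight design described above.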

We evaluate EmNet through an extensive user study on a different set of participants from a wide range of backgrounds, as well as other methods. Our evaluation is presented in detail in Section 5. The most important results are the following.

• EmNet increased the average user satisfaction by 24% over an unprovisioned network, and by 19% over a statically provisioned network.

• EmNet's increase in satisfaction came at the cost of only a 6% increase in application bandwidth compared to a statically provisioned network.

These results suggest that EmNet successfully allows individual users to personalize their home network performance, trading off perceived satisfaction and cost. This personalization is beneficial both for the individual users and for the network as a whole.

The lessons learned in our study of user satisfaction with home network performance, and from our EmNet system, suggest a principle: perhaps measurements of individual user satisfaction, or other individual user feedback, could be incorporated into network processing more broadly.

    A summary of this work can be found elsewhere [15].

2. CHALLENGES OF HOME NETWORKS

We characterize a home network as a small collection of end-hosts connected by a high-speed wired or wireless network and sharing a residential broadband connection (e.g., DSL, Cable, etc.). Home networks are ubiquitous: in 2007, the Consumer Electronics Association reported that 30% of households in the United States own home networking equipment [5]. The Pew Internet and American Life Project found that in 2008, 55% of adult Americans had broadband Internet access at home [10]. It is safe to say that the quality of the application-level experience of millions of Internet users is in some way influenced by the quality and performance of a home network.

Home networks are a microcosm of the broad range of network applications available to end-users. The Pew study also revealed that in a typical day, 58% of respondents "use an online search engine," 20% "watch a video on a video-sharing site like YouTube or Google Video," 16% "send instant messages," 4% "download a podcast," and 3% "download or share files using peer-to-peer networks such as BitTorrent or LimeWire." Other applications found on home networks include wide-area networked gaming, remote storage and backup, software updates, and voice over IP (the study lacked data on these applications).

A large-scale measurement study of broadband hosts conducted by Dischinger et al. [7] found that the complexity of residential connections extends beyond their throughput characterization. In addition to their (varied) limits in throughput, broadband links typically impose an additional delay (with most residential links adding an additional 100ms to packet RTT when compared to the last-hop router) and substantial loss rates. Throughput is often heavily asymmetric and oftentimes unstable. Using a smaller data set, Claypool et al. [4] found evidence of substantial variation between the queue sizes of DSL and Cable providers, leading to varying loss rates and packet delays between ISPs. The work suggests that network provisioning strategies targeted at the wide area might be insufficient to provide high-quality service for the home network.

HomeMaestro. The role of the home network is to provide network connectivity for a small number of users and their heterogeneous set of applications under varied and at times unreliable constraints. HomeMaestro [1, 11] seeks to mitigate home network contention by using a distributed approach to identify competing flows and allocate network resources fairly to end-hosts. To motivate the need for special intervention, the authors conduct a study of several representative home networks, collecting network traces along with user accounts of dissatisfaction. Surprisingly, the households report daily issues with the network across a range of applications. The HomeMaestro approach is able to provide approximately fair allocations according to a set of application-level weights. The setting of the application weights is left as an open question, and the authors acknowledge that these weights are likely to be user and application dependent. We demonstrate one possible way such weights can be derived directly from end-user feedback.

OneClick. Recently, Tu et al. [24] have described OneClick, a system allowing end-users to signal dissatisfaction with network performance. The feedback is used to construct bounds on the network-level quality of service metrics required for acceptable application-level performance. The approach can be applied to applications in which it is difficult to quantify the effects of the network on performance. While the authors' analysis points out some of the challenges of collecting user satisfaction levels, the individual user feedback is largely ignored, the goal being to inexpensively measure the mean satisfaction level, the Mean Opinion Score concept from various ITU recommendations on digital audio and video.

Our work characterizes the extent to which variation across users can be seen in user satisfaction with home network conditions. Further, we also demonstrate how to leverage this variation in controlling the home network to improve overall satisfaction.

3. VARIATION IN USER SATISFACTION

Given the intriguing results in other domains mentioned in Section 1, a natural question is whether actual measured user satisfaction given particular network conditions also exhibits significant variation. To answer this question in the context of home networks, we conducted a rigorous user study in which a wide range of users participated. Each was asked to run a set of network applications on an emulated home network, where a variety of cross-traffic scenarios were applied, and prompted for their satisfaction. We found the question could easily be answered in the affirmative: variation in user satisfaction is large. Furthermore, this variation is not explained by variation in application-level QoS measures.

3.1 Instrumentation

We created a user interface that can be transparently applied to existing web applications. Our approach is inspired by AjaxScope [12], a tool for dynamically instrumenting client-side, web-based JavaScript code for Web 2.0 applications. AjaxScope was designed to capture application-level performance metrics aggregated from a large number of users, with each user only running a portion of the application with instrumented code. We applied a simplified version of this technique to inject our own instrumentation into existing web pages and applications, with each page and application receiving identical modifications. Like AjaxScope, we use a web proxy to modify existing pages before they are sent to the client. The user need only point her existing web browser to the proxy server in order to receive the instrumented interface.

Figure 1: User interface: The satisfaction prompt appears over the web page periodically during the studies.

The code added to the client page exists in a single JavaScript closure and, aside from the UI elements added to the page's document object model, shares no global state with other code on the page. We wired our code to browser events so as to minimize the potential disruption to code executed either before or after our own. The UI communicates with the proxy server using asynchronous callbacks to well-known control URLs that are intercepted by the proxy. From the browser's point of view, these URLs exist under the current page's domain and thus do not pose a security threat that might prevent the asynchronous calls from being executed. The instrumentation loop is wholly separate from the existing application.

When the proxy server, which consists of 1400 lines of Python code, receives a request, the URL is first examined to determine if the request is from the injected instrumentation. If it is, the request, along with any posted data, is dispatched to a handler to either log the data or perform any necessary computation or network control. If this is not the case, then the proxy recursively requests the target URL with the appropriate HTTP method (GET, POST, etc.). When the request returns, the proxy examines the HTTP headers. If the content-type header indicates that the requested item is an HTML page, the proxy buffers the page and calls a handler to add the necessary instrumentation. The final page is then returned to the client. If the page is of a different content type (e.g., binary data), then the proxy merely forwards the data directly to the client.
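The dispatch logic just described can be sketched as follows. The control-URL prefix, the injected snippet, and the function names are illustrative assumptions, not the proxy's actual identifiers.

```python
# Sketch of the proxy's request dispatch; CONTROL_PREFIX and SNIPPET are
# hypothetical stand-ins for the real instrumentation.

CONTROL_PREFIX = "/__instr/"                     # assumed well-known control path
SNIPPET = "<script>/* satisfaction UI */</script>"

def handle_request(path, fetch, log):
    """Dispatch one request the way the proxy described above does."""
    if path.startswith(CONTROL_PREFIX):
        log(path)                                # log data / run network control
        return "", "text/plain"
    body, content_type = fetch(path)             # recursively fetch the target URL
    if content_type.startswith("text/html"):
        # Buffer the HTML page and inject the instrumentation before </body>.
        body = body.replace("</body>", SNIPPET + "</body>")
    return body, content_type                    # non-HTML is forwarded untouched
```

Passing a fake `fetch` makes the flow easy to see: HTML responses come back instrumented, while binary responses pass through unchanged.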

We use this instrumentation in the study described in this section, and in the study described in Section 5. For the present study, we created a simple user interface that asks the user to rate their satisfaction with the network performance during the last 30 seconds. The prompt is placed over the current web document on top of an obscuring layer that prevents the user from browsing while the prompt is visible, as can be seen in Figure 1. During our experimentation, we have control over when the prompt is presented via an API exposed by the proxy server. The prompt does not disappear until the user has selected a satisfaction level.

3.2 Testbed

Our testbed is designed to emulate present-day home-networking configurations with the additional abilities to (a) add instrumentation as described in the previous section, and (b) add cross traffic. The testbed is further extended with traffic scheduling capabilities for the study of Section 5.

The testbed is shown in Figure 2. We emulate a broadband router using a layer 2 Ethernet bridge configured to match the bandwidth characteristics of a typical home broadband connection. The bandwidth on the bridge router is restricted to a 3 Mbps download limit and a 768 Kbps upload limit (i.e., DSL speeds). The bridge server runs RedHat Enterprise Linux ES 4.7 and is configured using IPRoute2. We chose not to emulate a NAT-based router, to avoid any performance penalties, but instead filter out all traffic not destined to internal machines using IP-address-based firewall rules.

Figure 2: Network testbed used for the user studies. The testbed is configured in a standard dumbbell topology with web traffic traveling through a proxy server before reaching the bottleneck link.
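For concreteness, rate limits of this kind can be expressed with IPRoute2's tc tool roughly as below. The device names and the token-bucket parameters (burst, latency) are our assumptions; the report does not list the exact qdisc configuration used on the bridge.

```python
# Build (but do not run) tc commands imposing DSL-class rate limits on the
# bridge. Device names and tbf burst/latency values are illustrative
# assumptions, not the testbed's actual configuration.

def tbf_command(dev: str, rate_kbit: int) -> list:
    """Token-bucket filter limiting egress rate on one interface."""
    return ["tc", "qdisc", "add", "dev", dev, "root", "tbf",
            "rate", f"{rate_kbit}kbit", "burst", "32kbit", "latency", "400ms"]

download_cmd = tbf_command("eth0", 3000)   # 3 Mbps toward the LAN
upload_cmd = tbf_command("eth1", 768)      # DSL-class upload limit
# On the bridge itself, subprocess.run(download_cmd, check=True) would apply it.
```

Shaping egress on each of the bridge's two interfaces yields independent download and upload limits, matching the asymmetric broadband profile emulated here.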

We emulate the home user's LAN environment using a 1 Gigabit switch uplinked to the bridge's internal NIC. Another 1 Gigabit switch is placed between the bridge and the Internet link, for reasons we will discuss shortly. The uplink port on the external switch was connected to a university network for Internet connectivity.

The LAN environment consists of 3 machines: (1) a client machine used by the test subject, (2) a web proxy server that implements instrumentation as described in the previous section, and (3) a cross-traffic generator used to provide configurable background traffic. The client machine is configured to use the web proxy. Because the web proxy is on the internal network, it is not subject to external network fluctuations. Further, it can cleanly measure both client-side response time and variation in the performance of the web servers it interacts with. The proxy server runs Red Hat Enterprise Linux ES 4.7.

The cross-traffic in our testbed is created using a client machine on the internal network and a server machine connected to the switch on the external network. A local cross-traffic server allows us to accurately repeat specific cross-traffic scenarios. The cross-traffic is generated using a modified version of IPerf [20, 3] that provides support for parallel connections, time-based traffic generation, and specific bandwidth usage. Both the cross-traffic client and server were configured with RedHat Enterprise Linux ES 4.7.

All of the servers are IBM x335 computers (dual 2.0 GHz Xeons, 1.5 GB RAM, 40 GB IDE disk, dual Gigabit NICs). The switches are Netgear GS108 8-port Gigabit switches. The test subject machine is a Lenovo Thinkpad T61 (T7500, 2 GB RAM, 100 GB IDE disk, 14.1 inch 1400x1050 display, Gigabit NIC) which runs Windows XP Professional SP2. The web browser used is Firefox 3.0.2. Caching is disabled so that fetches always reach the web proxy.

3.3 Users, applications, and scenarios

We recruited subjects from the general population within a medium-sized private university. Formal approval by our Institutional Review Board allowed us to advertise widely across the university using a combination of fliers and email advertisements. Subjects were paid $20 for their time.

Application           Familiarity   % Usage
Peer-to-peer          5.05          83%
Web Browsing          6.74          100%
Voice over IP         4.32          67%
Streaming video       5.84          94%
Interactive web apps  6.53          94%
Instant messaging     6.63          100%

Figure 3: Application familiarity and usage information for our study population. Application familiarity was rated on a 1-7 scale, with 1 meaning "Never Used/Not Familiar" and 7 meaning "Very Familiar". We report usage as the percentage of respondents who use the application at least monthly.

The study1 involved 20 participants, including 13 men and 6 women (one participant chose not to answer the optional demographic questions). Each participant filled out a short survey measuring their use of and familiarity with various applications on a home network. Overall, 19 of the 20 participants have experience using a home network (11 report connecting using a cable modem while 8 report DSL). We present a detailed view of the participants' experiences in Figure 3. The participants frequently use a wide range of applications on home networks.

Each of our subjects used three different network applications, as explained below.

Wikipedia: Participants are given a set of thirty questions to answer using Wikipedia [26]. Participants are instructed to complete as many as they can before time runs out. Both in-memory and disk-based caching are disabled during the task to minimize ordering effects within the study. The task is representative of common web browsing, where traffic is aperiodic and bursty.

Image labeler: Participants play the Google Image Labeler game [8], a two-player web-based game in which players assign labels to images found on the Web. Players attempt to agree on a label for an image without either player having knowledge of the labels chosen by the other player. The next image is presented when both players give the same label, and players are scored based on the number and specificity of labels agreed upon. The application generates a large number of small, asynchronous web requests that communicate the game state.

Streaming video: Participants watch a streaming video. The video is compressed using the MPEG-4 codec such that the resulting bandwidth averages 2.9 Mbps. Both the client and server are running the VideoLAN Client (VLC) [25], version 0.8.5. The video is streamed over UDP without buffering using the MPEG-TS encapsulation method. The stream is unbuffered, such that packet loss introduces immediate effects that vary depending on the amount of data lost.

We consider 16 different cross-traffic scenarios, which are presented in Figure 4. All scenarios are bulk transfers; that is, we do not explore the effect of competing interactive traffic.

3.4 Study design

For each subject, a random ordering of applications was chosen, and then, for each application, a random ordering of scenarios was chosen. The subject began by reading and signing a consent form.

1 The formal study documents, including protocol and survey instruments, will be made available online for the final version of the paper.

Name     Direction  No. of Flows  Bandwidth
Up.1     Upload     1             100Kb
Up.2     Upload     1             300Kb
Up.3     Upload     1             500Kb
Up.4     Upload     1             768Kb
Up.5     Upload     4             100Kb/con
Up.6     Upload     4             300Kb/con
Up.7     Upload     4             500Kb/con
Down.1   Download   1             100Kb
Down.2   Download   1             500Kb
Down.3   Download   1             1Mb
Down.4   Download   1             3Mb
Down.5   Download   4             100Kb/con
Down.6   Download   4             500Kb/con
Down.7   Download   4             1Mb/con
Mixed.1  Download   4             1Mb/con
         Upload     4             200Kb/con
Mixed.2  Download   8             1Mb/con
         Upload     8             200Kb/con

Figure 4: Cross-traffic scenarios.

Next, he filled out an introductory questionnaire including demographic information and network application / home networking familiarity. Next, he was given a constrained period of time to familiarize himself with the study setup. He then proceeded to the first application task. He was given a written description and a constrained period to read it. Then the task would begin. As he worked on the task, each cross-traffic scenario would be applied for a fixed period of time. At the end of this period, the user was prompted for his satisfaction level. This process repeated for the other two application tasks. A final debriefing then occurred.

3.5 Results

Figure 5 presents box plots of the prompted user satisfaction, one graph for each application, with the graphs broken down further by cross-traffic scenario. The bold lines indicate the medians, while the boxes extend from the 25th to 75th percentiles.

It is abundantly clear from the data that for every application/scenario pairing, there exists substantial variation in satisfaction across users. Indeed, this variation for the most part swamps the aggregate differences between the scenarios and across applications. The video application (Figure 5(c)) shows the least per-scenario variation, but the variation is still clearly dominant.

One question the reader may have is whether we are observing the effects of different performance induced by network characteristics on the broader Internet that are not under our control. This is not the case. We measure the response time at the client side and at the network side (from the proxy's request to the ultimate web server). In Figure 6, we present the effect of our cross traffic on the response time of HTTP transfers for both the Wikipedia and Google Image Labeler tasks, as well as the packet loss of the streaming video. Note that in Figure 6(a) and Figure 6(b), there is considerable variation in the average response time under the scenarios with the most network contention. However, we see very little response-time variation in the scenarios with the least contention, even though considerable variation in user satisfaction remains. In the case of the streaming video, shown in Figure 6(c), several scenarios resulted in substantial packet loss and subsequently low levels of user satisfaction. However, the scenarios with the least contention still produced highly variable satisfaction ratings. This indicates two things. First, the vast majority of application variation is under our control. Second, the variation in satisfaction cannot be attributed to variation in application-level performance. Clearly, different users react differently to a particular application-level QoS level.

Figure 5: Prompted user satisfaction under different cross-traffic scenarios ((a) Wikipedia, (b) Image Labeler, (c) Video), sorted by increasing median satisfaction. Notice the considerable variation for each scenario, which often swamps aggregate differences between scenarios.

Figure 6: The effect of our cross traffic on the user experience under different scenarios ((a) Wikipedia, (b) Image Labeler, (c) Video), sorted by increasing median satisfaction. Average and variation in page request latency are given in (a) and (b), while average and variation in video packet loss are given in (c). Notice that for both the Wikipedia (a) and Image Labeler (b) tasks, the scenarios impacting satisfaction the least introduced very little average and variation in latency. These scenarios still exhibit considerable variation in user satisfaction. A similar trend can be seen in the packet loss rate of the streaming video (c), though the variation in the effect of cross traffic is less pronounced.

Our study shows that users have a wide range of expectations of application-level performance. This implies that an optimal network provisioning strategy (that is, one that maximizes the aggregate user satisfaction) is unlikely to be an even or static allocation of network resources.

4. EmNet SYSTEM DESIGN

Based on the results from our initial study, we designed a system, EmNet, that is capable of shaping network traffic based on individual user satisfaction. We target EmNet specifically at web-based applications, but believe that it can easily be extended to other application environments. The essential idea in EmNet is that the user is presented with a throttle overlay that also notes the cost of the current throttle setting. The user can change the throttle setting at any point. The throttle setting is then an input to the link provisioning algorithm running on the edge router. The throttle setting can be associated with a set of one or more flows. Our link provisioning algorithm uses the throttle inputs of the different flow sets, and network-wide parameters, to derive weights for WFQ scheduling of the Internet link.


    Figure 7: The EmNet Architecture

4.1 Architecture

EmNet is designed to be fully implemented inside a commodity broadband router. Its architecture is shown in Figure 7. Conceptually, the system can be separated into four components: (1) a user satisfaction sensor (the throttle in our implementation); (2) a proxy server that injects the user interface into the user experience (the


PerfMAX, and set the minimum scale value to 0, so that for every FlowSet, 0 ≤ PerfFlowSet ≤ PerfMAX.

The first part of our algorithm splits the link bandwidth into two components. One component is reserved and statically allocated between the background traffic and each FlowSet, while the other component is dynamically allocated. We denote the static component as StaticBW and the dynamic component as DynBW. The total link bandwidth we denote as BWTotal. We denote the ratio used to split BWTotal as the SplitRatio. Its value is constant and set such that 0 ≤ SplitRatio ≤ 1.

The amount of bandwidth reserved for each component is thus given as:

    given as:

    StaticBW = BWTotal SplitRatio

    DynBW = BWTotal (1 SplitRatio)

The StaticBW and DynBW components are then segmented equally between the background traffic and each FlowSet. The size of each segment is given as:

    StaticBWSeg = StaticBW / (N + 1)
    DynBWSeg = DynBW / (N + 1)

StaticBWSeg is then immediately allocated to the background traffic and to each FlowSet. These static allocations represent the minimum amount of bandwidth guaranteed to each group, and do not change. Initially, DynBWSeg is allocated to the background traffic and to each FlowSet as well; however, these allocations are not guaranteed. As the algorithm progresses, the allocation of DynBW is constantly recomputed based on the PerfFlowSet values.

DynBW is allocated depending on all of the PerfFlowSet values. The algorithm calculates the DynBW allocations such that if every PerfFlowSet value were set to PerfMAX (i.e., as if each FlowSet demanded maximum performance) then 100% of DynBW would be allocated to them, with the background traffic receiving only its minimum guaranteed bandwidth. Conversely, if every FlowSet minimized its PerfFlowSet value (PerfFlowSet = 0) then the background traffic would receive 100% of DynBW. The initial calculated values of DynBWSeg thus represent the case where every FlowSet sets its PerfFlowSet value to 0.5 × PerfMAX. In general, DynBW is allocated using:

    DynBWFlowSet = DynBW × PerfFlowSet / (N × PerfMAX)

where the denominator is the sum of PerfMAX over the N FlowSets.

This formula ensures that DynBW is allocated amongst the FlowSets as a function of their PerfFlowSet values, while ensuring that no single FlowSet can monopolize the network link. This method of allocation provides a given FlowSet with a segment of DynBW bounded by the maximum value of PerfFlowSet it is allowed to set, and determined by the current PerfFlowSet value. Because PerfFlowSet can be set to zero, the minimum bandwidth available to either a FlowSet or the background traffic is given by StaticBWSeg. Also, the maximum bandwidth available to each FlowSet is given by:

    MaxBWFlowSet = StaticBWSeg + (1 + 1/N) × DynBWSeg

The maximum possible bandwidth allocated to the background traffic is given by:

    MaxBWbackground = StaticBWSeg + DynBW
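The bounds above can be checked numerically. The following sketch (ours, not from the report) plugs in the parameters later used in the study: N = 1, SplitRatio = 0.10, and a 3Mbps downlink.

```python
# Worked example of the static/dynamic split and the bandwidth bounds.
BW_TOTAL = 3.0        # Mbps, downlink used in the study testbed
SPLIT_RATIO = 0.10
N = 1                 # one FlowSet, as in the second user study

static_bw = BW_TOTAL * SPLIT_RATIO          # 0.3 Mbps reserved
dyn_bw = BW_TOTAL * (1 - SPLIT_RATIO)       # 2.7 Mbps dynamic
static_seg = static_bw / (N + 1)            # 0.15 Mbps guaranteed per group
dyn_seg = dyn_bw / (N + 1)                  # 1.35 Mbps initial dynamic share

# Maximum a FlowSet can obtain, and maximum for background traffic.
max_flowset = static_seg + (1 + 1 / N) * dyn_seg   # 0.15 + 2.7 = 2.85 Mbps
max_background = static_seg + dyn_bw               # 0.15 + 2.7 = 2.85 Mbps
```

With a single FlowSet, the FlowSet and the background traffic have symmetric maxima; each can claim all but the other's 0.15 Mbps static segment.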

In general this algorithm gives each FlowSet control over an equal portion of DynBW that it can either use itself or give to the background traffic. A PerfFlowSet value of 5 out of 10 would mean that the FlowSet is giving half its dynamic bandwidth to the background traffic. However, FlowSets are not allowed to allocate another FlowSet's share of DynBW. An alternative view is that the link is allocated equally at first, but each FlowSet has the ability to steal its share of bandwidth from the background traffic.

The result generated by the algorithm is a set of values equal to portions of the total bandwidth. The size of the output set is N + 1: one value for each FlowSet plus one for the background traffic. Taken together the output set represents the allocated percentage of total bandwidth for each FlowSet as well as the background traffic. The output value for a given FlowSet is represented by OutputFlowSet, while the output for the background traffic is given by Outputbackground:

    OutputFlowSet = (DynBWFlowSet + StaticBWSeg) / BWTotal

    Outputbackground = 1 − Σ OutputFlowSet

where the sum runs over all N FlowSets.
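The whole allocation can be sketched end to end. The following is a minimal illustration of the provisioning algorithm as described above, not the EmNet implementation itself; the function name and defaults are ours.

```python
def provision(perf_values, split_ratio=0.10, perf_max=10):
    """Compute per-group shares of the link, as fractions of BWTotal.

    perf_values: one PerfFlowSet value per FlowSet, each in [0, perf_max].
    Returns (per-FlowSet output ratios, background output ratio).
    """
    n = len(perf_values)
    static_bw = split_ratio            # StaticBW as a fraction of BWTotal
    dyn_bw = 1.0 - split_ratio         # DynBW as a fraction of BWTotal
    static_seg = static_bw / (n + 1)   # guaranteed minimum per group

    # Each FlowSet's dynamic share scales linearly with its PerfFlowSet.
    out_flowsets = [static_seg + dyn_bw * p / (n * perf_max)
                    for p in perf_values]
    out_background = 1.0 - sum(out_flowsets)
    return out_flowsets, out_background
```

With the study parameters (N = 1, SplitRatio = 0.10), a PerfFlowSet of 5 yields a 50/50 split with the background traffic, 10 yields 95% for the FlowSet, and 0 leaves the FlowSet only its 5% static segment.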

While enforcing network partitioning is a valid method for ensuring fair bandwidth allocation under contention, it is suboptimal in situations where network contention is not occurring. In the case of bursty interactive traffic it becomes wasteful to permanently reserve a portion of the bandwidth for each flow. For this reason we implemented our provisioning algorithm on top of weighted fair queuing (WFQ). WFQ allows network fairness to be protected in worst-case scenarios when there is contention, while not placing an upper limit on the amount of available bandwidth when there is no contention. We discuss our use of WFQ in Section 4.5.

    4.4.3 Cost function

While our control algorithm prevents any single network user from monopolizing the link bandwidth, it does nothing to prevent users from being dishonest in reporting their satisfaction and thus maximizing their PerfFlowSet values. Ideally, there would be a mechanism which could accurately detect a user's true satisfaction. In practice, we rely on an outside force that provides an incentive for the user to be honest and exerts a downward pressure on the user's satisfaction rating and performance value. This force takes the form of a performance cost function. By associating a given cost with the performance that a user requests, our system can encourage users to be honest in their satisfaction rating.

We do not mandate a particular cost function; we only require that it encourage users to be honest in their allocation settings. The choice of cost function is dependent on the local policy of the network using EmNet. In the context of the home network, this might take the form of a head of household setting per-user limits on network usage, with access incurring a cost that grows in proportion to the weight. Perhaps no cost function is required if social pressure is an adequate mediator. A public wireless network may impose a financial cost on usage, once again scaled according to the weight selection.

4.5 Network control mechanism

The network control mechanism uses the results of both the proxy server and the policy controller to configure a network scheduler. EmNet uses weighted fair queuing as the mechanism for controlling network performance. All incoming and outgoing traffic is placed into specific queues and then processed according to the queues' configured weights. By modifying the queue weights


and altering the queues that a network connection uses, the network controller is able to optimize specific connections or groups of connections.

The inputs to the network control mechanism are the descriptions of the FlowSets provided by the proxy server, and the OutputFlowSet ratios from the policy controller. The FlowSet descriptions are used to generate packet classifiers that are capable of routing traffic belonging to a given FlowSet into that FlowSet's queue. The output from the policy controller is used to configure the appropriate weights for each FlowSet's queue.

The queue weights are set by transforming the output ratios from the policy controller (the percentages of the network bandwidth available to each FlowSet as well as the background traffic) into a set of weights that are assigned to the appropriate queues. The transformation is done by simply choosing a maximum weight value, W, that will represent 100% of the link bandwidth. W is then divided between the FlowSets and the background traffic in the same proportions as the bandwidth ratios given by the policy controller. Because the partitioning of the link is implemented using weighted fair queuing, any time a FlowSet is not sending or receiving network traffic its queue is simply ignored. This has the effect of dropping that weight from the overall fairness calculation, resulting in a larger bandwidth share for the remaining FlowSets as well

as the background traffic. As an example, if EmNet were running with 2 FlowSets, each with their performance values set to half of PerfMAX, the link would be equally partitioned into thirds: one third for each FlowSet and one third for the background traffic. However, if one FlowSet was not sending traffic, weighted fair queuing would ignore its queue and process packets from the remaining two queues in proportion to the given weights. This would result in the other FlowSet and the background traffic each getting half of the network bandwidth. If the silent FlowSet began sending network traffic the allocations would return to one third each.
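The weight transformation and the idle-queue behavior described above can be illustrated with a small sketch (ours, not the FreeBSD scheduler code): output ratios are scaled to integer weights summing to roughly W, and a queue with no backlog simply drops out of the proportional-share computation.

```python
def ratios_to_weights(ratios, w_max=100):
    """Scale the policy controller's output ratios (which sum to 1)
    to WFQ weights that sum to roughly w_max."""
    return [round(r * w_max) for r in ratios]

def effective_shares(weights, backlogged):
    """WFQ serves only backlogged queues, in proportion to their
    weights; idle queues contribute nothing to the denominator."""
    active = sum(w for w, b in zip(weights, backlogged) if b)
    return [w / active if b else 0.0
            for w, b in zip(weights, backlogged)]
```

For two FlowSets and the background traffic with equal weights, `effective_shares([33, 33, 33], [True, False, True])` gives each active queue half the link, matching the example above.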

    Implementing bandwidth partitions on top of WFQ allows ourprovisioning algorithm to aggressively impose limits on networkbandwidth without worrying whether those limits will cause un-derutilization of the available bandwidth.

4.6 Prototype implementation

In order to study the feasibility and effectiveness of EmNet, we built a prototype system that could be evaluated on our testbed (described in detail in Section 3.2). Much of our implementation used components developed during the initial user study, and was deployed on several server machines in the testbed. As a result, our implementation is not a monolithic system running inside an emulated broadband router but a distributed system with components at various locations in the network. There are no technical obstacles to building a system that runs on a commodity broadband router.

    4.6.1 User interface

The EmNet implementation does not directly measure user satisfaction, but instead relies on the user to choose a preferred performance value, PerfFlowSet, based on their satisfaction. We collect this value from the user by using techniques discussed in Section 3.1. As shown in Figure 8, a slider control is added to every web page by the proxy. By setting the slider to a given value the user directly indicates their PerfFlowSet. The slider's scale is linear with the range 0 to 10, and the initial slider value was set to the middle (5). The interface also includes a display of the current cost value computed by the policy controller, displayed slightly above the slider. The slider setting is sampled every second and the cost is updated every 3 seconds.

Figure 8: User interface: The slider overlay (right side of screen) hovers over a web page. Users are able to move the slider to reflect the amount of performance they would like to receive. The price is shown.

    4.6.2 Cost function

We implement a simple cost function that accumulates a price as the user makes use of the bottleneck link. The user effectively pays for each byte transferred. We created a simple server application that monitors all traffic associated with each FlowSet, counting the number of bytes transferred. This value is multiplied by the current PerfFlowSet value: the higher the user's performance value, the dearer each byte is. Because the cost is proportional to the amount of data transferred, this tends to bias against applications that consume more bandwidth and thus negatively affect our evaluation of EmNet. To limit these effects, EmNet multiplies the derived cost by a scaling factor. We tuned this parameter for each application such that the total cost of using the application during the study was within some range salient to the participants (around $10).
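The accounting can be sketched as follows. This is an illustration of the cost function's semantics under our reading of the description; the class and method names are ours, and the actual monitor was a separate server application.

```python
class CostMeter:
    """Accumulates a price for one FlowSet: bytes transferred are
    weighted by the current PerfFlowSet value and by a per-application
    scaling factor (tuned so study totals landed near $10)."""

    def __init__(self, scale):
        self.scale = scale   # per-application tuning parameter
        self.total = 0.0     # accumulated cost shown to the user

    def record(self, nbytes, perf_flowset):
        # The higher the user's performance value, the dearer each byte.
        self.total += nbytes * perf_flowset * self.scale
        return self.total
```

A FlowSet transferring at a high PerfFlowSet setting accumulates cost proportionally faster, which is the downward pressure the cost function is meant to exert.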

4.6.3 Proxy server

The EmNet implementation uses the proxy server implementation discussed in Section 3.1. The proxy is still responsible for injecting the interface into web requests and tracking network connections from the user's browser, as well as receiving measurements from the user interface. We extended the proxy to calculate the appropriate values for the cost function and send those values to the user. We also implemented the policy controller in Python inside the proxy server. The proxy server continues to operate on a separate physical machine connected to the testbed's internal network. The policy controller implements the provisioning algorithm explained in Section 4.4.

    4.6.4 Network controller

The network controller was implemented on top of a FreeBSD server running DummyNet. The server was configured with an Ethernet bridge that matched the performance of a typical home network (3Mbps down, 796Kbps up). We used the FreeBSD weighted fair queue implementation and relied on the kernel firewall to handle packet classification. Our configuration used a pair of rate-limited upload and download pipes that were fed by multiple WFQs. Each FreeBSD WFQ corresponded to one WFQ from the EmNet design.

Running on the bridge machine is a network control server, written in Perl, that was responsible for handling control requests coming from the proxy server. The network control server applied the policy configuration given by the proxy using the ipfw command line toolset. The commands sent to the network control server consist of a tuple ⟨srcIP, srcPort, dstIP, dstPort⟩ that can specify one or many connections, and a weight that is to be applied to traffic matching the given tuple. The network controller supports wildcard matching for each field in the tuple.

The proxy server is responsible for generating tuples that match the packets belonging to each FlowSet and calculating the appropriate weights to apply. The network control server translates the tuples into packet classification rules used by the firewall to route packets to the correct WFQ configured with the given weight. The network control server creates two WFQs with identical weights for each tuple, connecting one to the upload pipe and the other to the download pipe. An inverted packet classification rule is applied to the WFQ attached to the upload pipe to match a connection's outbound packets.
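The classifier semantics above can be sketched as follows. This illustrates the tuple wildcard matching and the rule inversion for the upload pipe; it is not the actual ipfw rule syntax, and `"*"` is our notation for a wildcard field.

```python
WILDCARD = "*"

def tuple_matches(rule, packet):
    """rule and packet are (srcIP, srcPort, dstIP, dstPort) tuples;
    a rule field of '*' matches any value in that position."""
    return all(r == WILDCARD or r == p for r, p in zip(rule, packet))

def invert(rule):
    """Swap source and destination fields, producing the rule applied
    to the upload-pipe WFQ to match a connection's outbound packets."""
    src_ip, src_port, dst_ip, dst_port = rule
    return (dst_ip, dst_port, src_ip, src_port)
```

For example, a rule ("*", "*", "10.0.0.5", "80") classifies all downstream traffic of a web connection to one queue, and its inversion classifies the matching upstream packets to the upload-pipe queue with the same weight.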

5. EVALUATION

To evaluate our EmNet prototype we conducted a second user study with a different set of participants. The participants repeated the same tasks as in the study of Section 3, but they were able to use EmNet to change their network performance. Our goal was to determine if users are able to use EmNet to increase their satisfaction with the network performance.

5.1 Study design

The study we conducted to evaluate EmNet was a modified version of the previous study, described in detail in Section 3. We now summarize the differences.

Instrumentation. The EmNet instrumentation, described in Section 4, was used, in addition to the prompted user satisfaction instrumentation of the original study.

Testbed. The testbed of the previous study, described in Section 3.2, was reused, but with EmNet running on it. This implies a significant change to the data path, in that the bridge server now runs FreeBSD and DummyNet, instead of Linux and IPRoute2.

Users, applications, and scenarios. The study was done using a new set of 18 participants drawn from the same university population as the initial study, using the same recruitment methods. The 18 participants consisted of 11 women and 5 men (once again, 1 participant chose not to answer the demographic questions). Each participant was given the same set of survey questions used in the first study. 16 participants reported experience using a home network. The per-application familiarity and usage questions yielded results similar to those of the first study (see Section 3.3), so we omit them here.

    The application tasks were the same as in the previous study.

    A subset of the cross-traffic scenarios from the earlier study waschosen. These are summarized in Figure 9. The subset was chosento contain those that had significant effects on median user satis-faction in the first study. A placebo scenario (no cross-traffic) wasalso included.

EmNet configuration. The EmNet implementation requires the setting of global parameters, as described in Section 4.4. The values used for the study are given in Figure 10. Additionally, the initial performance value PerfFlowSet was set to 0.5 × PerfMAX. This is the middle of the scale shown to the user. By forcing the

Name      Direction   No. of Flows   Bandwidth
Up.4      Upload      1              768Kb
Up.7      Upload      4              500Kb/con
Down.4    Download    1              3Mb
Down.7    Download    4              1Mb/con
Mixed.2   Download    8              1Mb/con
          Upload      8              200Kb/con

Figure 9: The subset of network scenarios used for the second user study.

Parameter    Value
SplitRatio   0.10
PerfMAX      10
N            1
W            100

Figure 10: Provisioning algorithm parameters used by the EmNet prototype in the second user study.

performance value to stay at this position, we can also consider a static WFQ case.

Study design. The overall study design was identical to that described in Section 3.4, with several differences, summarized here.

First, in addition to the prompt for satisfaction after each network scenario, users were also given the slider interface. Second, each user was exposed to a given network scenario for 60 seconds (as opposed to 30 seconds in the earlier study). The purpose was to give them time to find a satisfactory setting of the EmNet slider control.

The users were told that moving the slider up would increase performance while moving it down would decrease performance. To provide an incentive for the users to move the slider down, the cost penalty was described to them, and the cost display was always visible as part of the interface. The users were given a goal of minimizing cost while keeping network performance at a level that they were satisfied with.

Every user experienced every network scenario twice: once where the slider control actually controlled his network performance, and once where the slider control did nothing. For the latter cases, a hard-coded PerfFlowSet value of 5 was used in the policy controller, as described earlier. This resulted in a network controller with a statically configured WFQ for these cases.

For each user, a random ordering of application tasks, a random ordering of network scenarios within each task, and a random ordering of static WFQ and dynamic WFQ (EmNet) cases were used.

5.2 Results

We consider the bandwidth used, the prompted user satisfaction, and the users' chosen performance values. Figure 11 shows the average download bandwidth used by the applications with EmNet and static WFQ, grouped by scenario. The whiskers represent the overall standard deviation, not the confidence interval for the mean. Figure 12 shows the average user satisfaction, and the standard deviation, for EmNet, static WFQ, and for the earlier observational study. Finally, Figure 13 shows the distribution of the performance values (PerfFlowSet) chosen by the users.



    (a) Wikipedia (b) Image Labeler (c) Video

    Figure 11: Average downstream throughput under each scenario and system.


    (a) Wikipedia (b) Image Labeler (c) Video

    Figure 12: Average satisfaction level under each scenario and system.

Caveats. Our studies are the first large-scale user studies of their kind. Although data points such as ours are extremely expensive to get in terms of funds and manpower, it is important to note that we will examine what amounts to about 20 data points per application/scenario/network control configuration triplet. Further, comparisons between the observational study and the evaluation study should take into consideration their differences, which are summarized in Section 5.1.

    5.2.1 Wikipedia

In terms of bandwidth consumed, the static WFQ and EmNet results are similar. As seen in Figure 11(a), for both, Wikipedia consumed a moderate amount of bandwidth, but with high variability across the users. The variation in application bandwidth that we see in scenarios where there is significant contention for download bandwidth is generally slightly higher with EmNet than with static WFQ.

Figure 12(a) shows the prompted user satisfaction numbers for Wikipedia. Note that unlike the bandwidth numbers, here we also compare with the observational study results. The overall shape of the results is similar across the cross-traffic scenarios. The average satisfaction levels for static WFQ and EmNet are comparable, and both are much higher than those in the observational study. On average, static WFQ and EmNet users are 33% more satisfied than the observational study participants. For scenarios with cross traffic in the download direction, EmNet shows a very slight average drop in satisfaction compared to static WFQ, which corresponds to the drop in the average bandwidth seen in Figure 11(a).

When using EmNet, many users did not change their setting from the default, but those that did increased it, and presumably reaped higher satisfaction. Figure 13(a) shows the distribution of performance values (PerfFlowSet) that users chose for each cross-traffic scenario. For each scenario the median value was very nearly 5, which would result in performance very near to that of static WFQ. However, the distribution of the performance values shows considerable variation, with almost all variation concentrated above the median.

Taken together, the results show that for most people, the addition of network control, such as static WFQ or EmNet, dramatically increases satisfaction with the performance of Wikipedia. With such control, cross traffic has little effect on satisfaction. The average bandwidth consumed by Wikipedia under network control is largely independent of the amount of cross traffic, suggesting that the necessary bandwidth was almost always available to the user. This in part explains the similarity between the static WFQ and EmNet satisfaction results.

    5.2.2 Image labeler

Figure 11(b) shows that the required download bandwidth was small. Furthermore, while the bandwidth variability is not insignificant, it is generally much less than in Wikipedia. Because the game is interactive, this suggests that performance was strongly dependent on latency rather than available bandwidth. Like the Wikipedia results, the average download bandwidths for static WFQ and EmNet change little across different cross-traffic scenarios.

Average satisfaction increases dramatically going from the observational study to static WFQ, and then slightly again going to EmNet. Furthermore, the variation in satisfaction decreases significantly. The satisfaction measurements are given in Figure 12(b). Both WFQ and EmNet increase average satisfaction on the order of 27% compared to the observational study. It is important to point out, however, that due to the very small demand for bandwidth and


    (a) Wikipedia (b) Image Labeler (c) Video

    Figure 13: Variation in user-selected performance values (PerfFlowSet) under EmNet for different cross-traffic scenarios.

the significant amount of interaction required, static WFQ and EmNet are likely improving satisfaction largely through providing latency bounds.

Compared to Wikipedia or Video, users of Image Labeler are far more likely to decrease their PerfFlowSet values. The distributions of PerfFlowSet values are shown in Figure 13(b). Given the low bandwidth demands of Image Labeler, adequate performance can be had at a lower setting. However, the median is 5; we had expected users would choose even lower PerfFlowSet values.

Intuitively, highly interactive, low-bandwidth applications can deliver very high satisfaction when provided with guaranteed latency bounds. Both static WFQ and EmNet provide this, while the configuration of the observational study does not. Additionally, the results show that EmNet provides a consistent, if small, increase in average satisfaction over the static WFQ configuration.

    5.2.3 Video

Unlike Wikipedia and Image Labeler, the Video task uses a large portion of the link's download bandwidth. The bandwidth for Video is given in Figure 11(c). Unlike the earlier tasks, the video stream bandwidth was significantly different across the various cross-traffic scenarios, while at the same time showing much less variance in relative terms. Because the video stream was sent over UDP it was less susceptible to upload cross traffic. Both static WFQ and EmNet show similar bandwidth when there is no contention on the download link, but with contention EmNet provides considerably more bandwidth to Video than static WFQ does. With download contention, EmNet provides 25% more bandwidth than static WFQ. That is, the users are able to demand more bandwidth for Video with EmNet.

Video shows some of the most dramatic differences in satisfaction and the largest differences between static WFQ and EmNet. Figure 12 shows the satisfaction results. Here, the results for the observational study are almost all better than for both static WFQ and EmNet. This is due to the interaction of the video's UDP traffic with the cross traffic and the network control system. In the observational study the UDP-based video stream was free to compete with the TCP cross traffic, resulting in the congestion control algorithm being activated for the TCP flows. This effectively choked off the cross traffic while the video stream monopolized the link bandwidth. With static WFQ the video stream was limited to 50% of the link bandwidth, which resulted in a very large decrease in video quality that was immediately noticeable to users.

Users report much higher satisfaction with EmNet than with static WFQ. With EmNet users are able to increase the bandwidth Video gets, resulting in large increases in satisfaction. Considering the satisfaction results (Figure 12(c)) in light of the application bandwidth results (Figure 11(c)), a large increase in satisfaction is purchased with only a tiny increase in application bandwidth. Note that it is probably not possible for EmNet to achieve the same satisfaction results as in the observational study, nor should it be the case: performance there was due to choking the competing TCP flows.

Figure 13 shows the distributions of the users' PerfFlowSet values. As we might expect, most users moved the slider up to improve performance: the medians are all > 5. Further, the median PerfFlowSet values for the cross-traffic scenarios with download contention were the highest. It is interesting, however, that even when there is no cross traffic, the median value remains high. This is unexpected, and could be an artifact of the small number of data points.

Video was unique among the three tasks by the nature of its large bandwidth usage that did not vary much between users. This meant that EmNet would often throttle Video's bandwidth. Thus, the user was in the position of having to increase his PerfFlowSet value for adequate Video performance. He was then forced to carefully choose a balance between cost and performance, namely to find the lowest PerfFlowSet value that was acceptable. While the large variation in user satisfaction shows that there were mixed results in the users' ability to do this, it is also clear that EmNet does a better job of optimizing for user satisfaction than static WFQ. This is best seen in the scenarios with download contention, where a 44% and 37% increase in bandwidth resulted in a 268% and 267% increase in user satisfaction, respectively.

    5.2.4 General Results

Comparing all three network control configurations (None (observational study), WFQ, and EmNet) across both studies and all cross-traffic scenarios, our results are that EmNet increases average user satisfaction by 24% compared to a configuration without network control. Further, EmNet increases average satisfaction by 19% compared to static WFQ. Finally, EmNet achieves these increases in satisfaction with only a 6% increase in application bandwidth compared to static WFQ.

Interestingly, we find that, overall, users are reluctant to decrease their PerfFlowSet value once it has been raised. While the per-byte cost displayed in the user interface of EmNet is intended to encourage users to decrease their setting as circumstances permit, it is not clear that this cost was a powerful enough incentive. Of course, in the study, the cost reflected no real-world monetary value. We speculate that a real cost, such as would be expected in a deployment, would do better. Alternatively, pressure from other users in the home network could act as an additional cost function. We hope to explore both of these in our future work.


6. CONCLUSIONS

We introduced a new method of optimizing home network broadband connections by using individual user satisfaction as an input. We conducted an observational user study that demonstrated that user satisfaction shows a high degree of variance, meaning that each user's perception of network performance is very different. Optimizing for a canonical user is not sensible. We designed and implemented EmNet, a system that optimizes a home network broadband connection based on measurements of individual user satisfaction. We evaluated EmNet in a second user study that demonstrated that individualized optimizations can considerably improve user satisfaction with low resource cost. On average EmNet is capable of increasing user satisfaction by 24% over an uncontrolled link, and by 19% over a simple static configuration using weighted fair queuing. It does so by increasing average application bandwidth by only about 6%.

    7. REFERENCES

    [1] ATHANASOPOULOS , E., KARAGIANNIS, T., GKANTSIDIS ,C. , AN D KEY, P. Homemaestro: Order from chaos in homenetworks. In Proceedings of ACM SIGCOMM (demo)(August 2008). Also see Microsoft Research Technical

    Report MSR-TR-2008-84.[2] BENNETT, J., AN D ZHANG, H. Worst-case fair weighted

    fair queueing. In Proceedings of IEEE INFOCOM 1996(March 1996).

    [3] BLO M, H. Modified iperf implementation. (web page).http://staff.science.uva.nl/ jblom/gigaport/tools/test%5Ftools.html.

    [4] CLAYPOOL, M., KINICKI, R., LI, M., NICHOLS, J., AN DWU, H. Inferring queue sizes in access networks by activemeasurement. In Proceedings of the 4th Passive and Active

    Measurement Workshop (PAM) (April 2004).

    [5] CONSUMER ELECTRONICS ASSOCIATION. CEA findsamerican adults spend $1,200 annually on consumerelectronics products, April 2007.

    [6] DINDA, P., MEMIK, G., DIC K, R., LIN , B., MALLIK, A.,

    GUPTA, A., AN D ROSSOFF, S. The user in experimentalcomputer systems research. In Proceedings of the Workshopon Experimental Computer Science (ExpCS 2007) (June2007).

    [7] DISCHINGER , M., HAEBERLEN , A., GUMMADI , K. P.,AN D SAROIU, S. Characterizing residential broadbandnetworks. In Proceedings of the 7th ACM SIGCOMM /USENIX Internet Measurement Conference (IMC) (October2007).

    [8] GOOGLE, INC . Google image labeler. (web page).http://images.google.com/imagelabeler/.

    [9] GUPTA, A., LIN , B., AN D DINDA, P. A. Measuring andunderstanding user comfort with resource borrowing. InProceedings of the 13th IEEE International Symposium on

    High Performance Distributed Computing (HPDC 2004)(June 2004).

    [10] HORRIGAN, J. B. Home Broadband Adoption 2008. PewInternet and American Life Project, 2008.

    [11] KARAGIANNIS, T., ATHANASOPOULOS , E., GKANTSIDIS ,C. , AN D KEY, P. Homemaestro: Order from chaos in homenetworks. Tech. Rep. MSR-TR-2008-84, MicrosoftResearch, 2008.

    [12] KICIMAN, E., AN D LIVSHITS, B. Ajaxscope: a platform forremotely monitoring the client-side behavior of web 2.0applications. In Proceedings of the 21st ACM SIGOPS

    Symposium on Operating Systems Principles (SOSP)

    (October 2007).

    [13] LAKSHMINARAYANAN , K., AN D PADMANABHAN , V. N.Some findings on the network performance of broadbandhosts. In Proceedings of the 3rd ACM SIGCOMM / USENIX

    Internet Measurement Conference (IMC) (October 2003).

    [14] LANGE, J., DINDA, P., AN D ROSSOFF, S. Experiences withclient-based speculative remote display. In Proceedings ofthe USENIX Annual Technical Conference (USENIX) (2008).

    [15] LANGE, J., MILLER, J. S., AN D DINDA, P. EmNet:Satisfying the individual user through empathic homenetworks: Summary. In Proceedings of the ACMSIGMETRICS International Conference on Measurement

    and Modeling of Computer Systems (SIGMETRICS) (June2009).

    [16] LIN , B., AN D DINDA, P. Vsched: Mixing batch andinteractive virtual machines using periodic real-timescheduling. In Proceedings of ACM/IEEE SC(Supercomputing) (November 2005).

    [17] LIN, B., MALLIK, A., DINDA, P., MEMIK, G., AND DICK, R. Power reduction through measurement and modeling of users and CPUs: Summary. In Proceedings of the ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems (SIGMETRICS) (June 2007). Extended version appears as Northwestern EECS technical report NWU-EECS-06-11.

    [18] MALLIK, A., COSGROVE, J., DICK, R., MEMIK, G., AND DINDA, P. PICSEL: Measuring user-perceived performance to control dynamic frequency scaling. In Proceedings of the 13th International Conference on Architectural Support for Programming Languages and Operating Systems (ASPLOS) (March 2008).

    [19] MALLIK, A., LIN, B., MEMIK, G., DINDA, P., AND DICK, R. User-driven frequency scaling. Computer Architecture Letters 5, 2 (2006).

    [20] NATIONAL LABORATORY FOR APPLIED NETWORK RESEARCH. Iperf network performance measurement tool. (web page). http://iperf.sourceforge.net/.

    [21] SHYE, A., OZISIKYILMAZ, B., MALLIK, A., MEMIK, G., DINDA, P., AND DICK, R. Learning and leveraging the relationship between architectural-level measurements and individual user satisfaction. In Proceedings of the 35th International Symposium on Computer Architecture (ISCA) (June 2008).

    [22] SHYE, A., PAN, Y., SCHOLBROCK, B., MILLER, J. S., MEMIK, G., DINDA, P., AND DICK, R. Power to the people: Leveraging human physiological traits to control microprocessor frequency. In Proceedings of the 41st Annual IEEE/ACM International Symposium on Microarchitecture (MICRO) (November 2008).

    [23] STILIADIS, D., AND VARMA, A. Latency-rate servers: A general model for analysis of traffic scheduling algorithms. IEEE/ACM Transactions on Networking (ToN) 6, 5 (1998), 611–624.

    [24] TU, C.-C., CHEN, K.-T., CHANG, Y.-C., AND LEI, C.-L. OneClick: A framework for capturing users' network experiences. In Proceedings of ACM SIGCOMM 2008 (poster) (August 2008).

    [25] VIDEOLAN TEAM. VLC media player. (web page). http://www.videolan.org/vlc/.

    [26] WIKIMEDIA FOUNDATION, INC. Wikipedia, the free encyclopedia. (web page). http://en.wikipedia.org/.