
JOURNAL OF INFORMATION POLICY 3 (2013): 304-330.

THE IMPACT OF CONTENT DELIVERY NETWORKS


ON THE INTERNET ECOSYSTEM
BY MANUEL PALACIN, A MIQUEL OLIVER, B JORGE INFANTE, C SIMON OECHSNER, D AND ALEX BIKFALVI E

Are Tier 1 ISPs and “hyper-giant” content providers using preferential
interconnection agreements to create Content Delivery Networks (CDNs) that allow
them to provide an improved Quality of User Experience inconsistent with network
neutrality principles? Yes, say the authors, and they recommend that regulators
address these possibly oligopolistic and anti-competitive practices. The effects arising
from these CDNs, the authors show, are the same as those of regulated traffic
prioritization. Using innovative models, original data, and analysis based on the
experience of the Internet ecosystem in Spain, the authors conclude that lesser long
tail content providers without significant market power are being cannibalized into
such CDNs.

INTRODUCTION
Quality of Experience (QoE) 1 for end users is a key factor in the success of a new Internet service.
Consequently, Internet applications require new strategies for distributing their content while
offering the best user experience. Content Delivery Networks (CDNs) are technical solutions for
providing high-performance content distribution. CDNs mirror the most popular web content to a
set of distributed cache servers located at the edge of the network close to the end users. CDNs act
as bypasses around the network that attempt to redirect users to the most suitable server based on
performance criteria while avoiding saturated links. The main consequence of this bypass is that
Internet Service Providers (ISPs) and content providers (CPs) are efficiently connected, resulting in
better performance.

A Lecturer, Departament de Tecnologies de la Informació i les Comunicacions, Universitat Pompeu Fabra, Barcelona.
B Professor, Departament de Tecnologies de la Informació i les Comunicacions, Universitat Pompeu Fabra, Barcelona.
C Senior expert, Comisión del Mercado de las Telecomunicaciones, Spain.
D Visiting Professor, Departament de Tecnologies de la Informació i les Comunicacions, Universitat Pompeu Fabra, Barcelona.
E Post-Doctoral Researcher, Departament de Tecnologies de la Informació i les Comunicacions, Universitat Pompeu Fabra, Barcelona.

This work was partially supported by the Spanish government through the project CISNETS (TEC2012-32354).
1 Quality of Experience is a subjective measure of a customer’s experiences with an Internet service. See Body of
European Regulators for Electronic Communications, “BEREC Guidelines for Quality of Service in the Scope of
Network Neutrality,” white paper BoR (12) 32, May 29, 2012, accessed June 17, 2013,
http://berec.europa.eu/files/news/bor_12_32_guidelines.pdf, 14.


To understand how this scenario is evolving for the different Internet players, this article
addresses the following research questions: (1) What is the distance, in terms of operator hops,
between content providers and Access ISPs? and (2) What are the content delivery strategies of large
and long tail content providers?

This article provides a quantitative study of the connectivity level of the web content ecosystem in
Spain. However, the study may be extrapolated to any other European country as network structures
and content are quite similar within the European Union. The aim of this work is to provide a new
methodology to research Internet topology and to empirically prove the extensive use of CDNs by a
certain type of actor. Based on the stated research questions, we divide the work into two main
contributions.

The first contribution consists of creating an interconnection model with the objective of observing
the connectivity level between the different players in the current Internet ecosystem (content
providers, CDNs, and Tier 1 and Access ISPs). A high connectivity level would indicate a major
change in Internet interconnection: the Internet would have evolved into a new structure that
optimizes data transfer, potentially by making use of CDNs. However, this first contribution does
not prove the extensive use of CDNs – it is only a hint. To prove the real emergence of CDNs, it
is necessary to go a step further and analyze where exactly Internet content is hosted. The second
contribution of this study analyzes in-depth the real location of Internet content to determine if it is
delivered using CDNs. To perform this task, we analyze a set of 43 websites from a list of the most
visited websites (e.g. Google, Yahoo, Microsoft, and Facebook), and a sample of long tail content
providers, to identify whether their URLs redirect to CDNs. Therefore, through the inspection of
web content it is possible to identify whether content providers of different sizes use different
content distribution strategies.

Other related works have previously analyzed how CDNs are changing the way content is
distributed. Authors like Clark et al. focus on the economic impact of CDNs on the interconnection
ecosystem and observe a high asymmetry between the traffic flows from content providers to end
users in comparison with the flows going from end users to content providers. This asymmetry has
inspired many disputes between CDNs and Access ISPs about their interconnection agreements,
which have led to a change of their terms. On one hand, Access ISPs are the destination of large
amounts of traffic originated by CDNs. Thus, they believe that they should demand fees for
delivering this traffic to end users using their network resources. On the other hand, CDNs realize
that this is the only way to reach end users because Access ISPs are in a strategic position. Clark et
al. discuss all these disputes in a network neutrality context and conclude that CDNs may pay Access
ISPs even though this could be considered a risk to competition, because they believe the market
will provide enough competitive transit prices to sustain the ecosystem. 2

2 David D. Clark, William Lehr, and Steven Bauer, “Interconnection in the Internet: The Policy Challenge,” paper
presented at the Telecommunications Policy Research Conference, Arlington VA, Sept. 2011, accessed June 17, 2013,
http://groups.csail.mit.edu/ana/Publications/Interconnection_in_the_Internet_the_policy_challenge_tprc-2011.pdf.

Dimitropoulos et al. 3 and Gao 4 have also analyzed operator interconnections from a more technical
perspective. They use a methodology to quantify the type of inter-Autonomous System (AS) 5
relationships that exist in the Internet and classify them into three groups based on the state of
Border Gateway Protocol (BGP) 6 messages: customer-to-provider, peer-to-peer, and sibling-to-sibling
relationships. 7 They found that more than 90.5% of the relationships are customer-to-provider, less
than 8% are peering and less than 1.5% are sibling relationships. Other authors like Faloutsos et al. 8
have focused on generating a global Internet topology using a power law methodology. They have
obtained results about the percentage of nodes that a node can reach in each hop, demonstrating
that more than 99% of the Internet nodes can be reached within a maximum of six hops.

Shavitt and Weinsberg recently discussed the topological trends of content providers. They create
snapshots of the AS-level graph from late 2006 until early 2011, and then analyze the interconnection
trends of the transit and content providers and their implications for the Internet ecosystem. AS
graphs are built by traversing IP traceroutes 9 and resolving each IP address to its corresponding AS.
Shavitt and Weinsberg proved that large content providers like Google, Yahoo!, Microsoft,
Facebook, and Amazon have increased their connectivity degree during the observed period and are
becoming key players in the Internet ecosystem, strengthening the idea that the Internet is becoming
flatter. 10

Regarding the research method, this article analyzes interconnection relationships after modeling an
Internet graph with a subset of ASes. Using this model we can identify the interconnection
relationships and measure the current distance between ASes. The model uses the Dijkstra routing
algorithm 11 to generate Internet topologies and the results are validated using the traceroute tool.
The novelty of the methodology used here is that it is based on creating a connectivity map fed with
AS information using Dijkstra instead of BGP – the routing policies of which are not publicly

3 Xenofontas Dimitropoulos, Dmitri Krioukov, Marina Fomenkov, Bradley Huffaker, Young Hyun, kc claffy, and
George Riley, “AS Relationships: Inference and Validation,” SIGCOMM Computer Communications Review 37, no. 1 (2007):
31-40.
4 Lixin Gao, “On Inferring Autonomous System Relationships in the Internet,” IEEE/ACM Transactions on Networking 9,

no. 6 (2001): 733-745.


5 An Autonomous System (AS) is a collection of connected Internet Protocol (IP) routing prefixes under the control of

one or more network operators that presents a common, clearly defined routing policy to the Internet. See John
Hawkinson and Tony Bates, “Guidelines for Creation, Selection, and Registration of an Autonomous System (AS),”
technical standards document RFC1930, Mar. 1996, accessed June 17, 2013, http://tools.ietf.org/html/rfc1930.
6 Border Gateway Protocol (BGP) is an exterior routing protocol that makes routing decisions based on path, network

policies, and/or rule-sets. It is used to route traffic between different Autonomous Systems.
7 Sibling-to-sibling is a type of AS relationship in which two different ASes belong to the same ISP. For example,

Telefónica has more than one AS whose interconnections are under a sibling-to-sibling relationship.
8 Georgos Siganos, Michalis Faloutsos, and Christos Faloutsos, “The Evolution of the Internet: Topology and Routing,”

working paper (2001), accessed June 17, 2013, http://static.cs.ucr.edu/store/techreports/UCR-CS-2002-05065.pdf.


9 traceroute is a network diagnostic tool for displaying the route (path) and measuring transit delays of packets across an

IP network between two endpoints.


10 Yuval Shavitt and Udi Weinsberg, “Topological Trends of Internet Content Providers,” SIMPLEX ‘12: Proceedings of

the Fourth Annual Workshop on Simplifying Complex Networks for Practitioners (2012): 13-18.
11 Dijkstra is a graph search algorithm that solves the single-source shortest path problem for a graph with non-negative

edge path costs, producing the shortest path tree. Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, and
Clifford Stein, Introduction to Algorithms, 2nd ed. (Cambridge, MA: MIT Press, 2001), 595-601.

available due to ISP strategic decisions. Therefore, Dijkstra provides a simpler way to obtain an
approximation of the real interconnection relationships. In addition, this study uses the nslookup 12
tool to identify the exact location of content following the redirection URLs pointing to external
providers.

This article is structured as follows. In the next section we describe the architecture of the Internet.
Next, we present a classification of the different types of CDNs. The article continues by addressing
the first research question and describing the methodology used to generate the model and obtain
the results. The following section deals with the second research question and provides the results of
the inspection of content delivery strategies. Finally, the paper outlines the implications of CDNs in
the Internet marketplace. Whereas traffic prioritization raises questions of network neutrality, here
we argue that CDNs have a similar prioritization objective. Because CDNs accelerate some content
over others, regulators should address their impact on the welfare of the Internet ecosystem.

BYPASSING THE INTERNET


The Internet architecture implemented until the early 2000s was based on a multi-tier hierarchic
structure. 13 Tier 1 ISPs were on top of the hierarchy followed by the Tier 2 regional ISPs and the
Access ISPs at the lower part of the hierarchy connecting the end users. In this scheme, Tier 1 ISPs
were highly connected to other ISPs and offered transit services to other ISPs in lower layers.
Content was distributed through Access ISPs or, in the best cases, through ISPs located at
advantageous points. Traffic flows were required to go up and then down in the hierarchy to reach
end users (see Figure 1 below).

12 nslookup is a tool that queries the Domain Name System (DNS) to find a domain name, IP address, or related
information for selected Internet content.
13 Craig Labovitz, Scott Iekel-Johnson, Danny McPherson, Jon Oberheide, and Farnam Jahanian, “Internet Inter-

Domain Traffic,” SIGCOMM ’10: Proceedings of the ACM SIGCOMM 2010 Conference (2010): 75-86; Amogh Dhamdhere
and Constantine Dovrolis, “The Internet Is Flat: Modeling the Transition from a Transit Hierarchy to a Peering Mesh,”
Co-NEXT ’10: Proceedings of the 6th International Conference (2010): Article 21; Shavitt and Weinsberg.

Figure 1: Traditional Hierarchic Internet Structure. 14

Currently, Internet architecture is evolving and introducing new peering agreements, and most of
the ISPs have increased their connectivity level. CDNs have emerged and have modified the
interconnection paradigm as they move the servers from which end users download content closer
to them, bypassing transit networks for these connections. In addition, apart from the new CDNs,
some telco operators have also evolved and have upgraded their networks and systems to offer
CDN services (see Figure 2 below).

Figure 2: Current Internet Structure. 15

14 Adapted from Labovitz, Iekel-Johnson, McPherson, Oberheide, and Jahanian.



In Figure 1 we can observe that content hosted in Access ISP A must be transmitted up and then
down in the hierarchy passing through the different operators to reach the end users of Access ISP
D. In Figure 2, we can observe a different Internet topology in which content hosted on CDNs
“bypass the network” and easily reach the end users thanks to direct and faster connections. This
bypass represents a breakthrough in network performance and, at the same time, it optimizes network
resources thanks to the use of cache servers near the end user. The content of a CDN does not need
to be transferred through the entire network each time an end user makes a request. CDNs cache
content close to the end users and deliver a cache copy without affecting the rest of the network.

Figure 3 below shows the difference in the network resource utilization between the traditional
hosting scheme and the CDN paradigm. The traditional scheme (1) requires transmitting a
datastream with the content for each user request, while in the CDN alternative (2) the CDN data
center (the CDN element that first receives the content from the content provider) transmits a
datastream to each cache server and then each user request is served by the cache server. This
comparison helps to illustrate the traffic savings in the backbone made possible by the CDN.
However, we note that this diagram is a simplification of a more complex scenario. (For example,
the content expiration that would require a new data replication from the CDN data center to the
cache servers is not considered here.)
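To make the saving concrete, consider a hypothetical object requested R times by the end users of one Access ISP and replicated to C cache servers; the figures below are illustrative only and are not taken from the article. Under traditional hosting the backbone carries one stream per request, whereas under the CDN scheme of Figure 3 it carries only one replication stream per cache server (ignoring the cache expiration and misses mentioned above). A minimal sketch in Python:

# Illustrative comparison of backbone streams for the two schemes in Figure 3.
# The request and cache-server counts are hypothetical, not the study's data.
def backbone_streams(requests: int, cache_servers: int) -> tuple[int, int]:
    traditional = requests       # one end-to-end stream per user request
    cdn = cache_servers          # one replication stream per cache server
    return traditional, cdn

trad, cdn = backbone_streams(requests=100_000, cache_servers=20)
print(f"traditional hosting: {trad} backbone streams")
print(f"CDN delivery: {cdn} backbone streams")
print(f"backbone reduction: {100 * (1 - cdn / trad):.1f}%")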

Figure 3: Traditional Hosting vs. CDN Delivery.

15 Ibid.

THE TAXONOMY OF CONTENT DELIVERY NETWORKS


The term Content Delivery Network is open to different
interpretations. A content delivery network is defined as a technical solution that deploys many
servers in many distributed locations to offer the best QoE to the end user. However, this definition
is too generic and each provider implements its network solution in a different way. Huang et al. 16
identify two main types of CDN. On one hand, we find CDNs that build large content distribution
centers in few – but strategic – locations, and that connect these centers with high speed links to the
ISPs (see Figure 4 [a] below). In this strategy, CDNs try to place the distribution centers at vantage
points (VP) that are simultaneously close to many large ISPs. On the other hand, there exist CDNs
that deploy their cache servers inside the ISPs. Level3 is an example of the first type of CDN,
whereas Akamai is an example of the second type (see Figure 4 [b] below).

Figure 4: CDN Strategies.

Both types of CDN strategy have technical advantages and disadvantages. The first strategy has lower
management overhead, but at the expense of a higher response time. The second
strategy has optimal performance in terms of response time because its servers are located very close to the end
user; however, the management and server deployment are more complex. CDNs must analyze which
strategy is most profitable depending on the content: investing in distributed network equipment
and management, or investing in a few vantage points and buying network capacity.

In addition to these two types, there are hybrid architectures that combine inside-ISP-server-
collocation with large CDN data centers, other more hierarchic structures that do not have direct
connection with the target Access ISP, or CDNs that manage caches using P2P structures. 17

16 Cheng Huang, Angela Wang, Jin Li, and Keith W. Ross, “Understanding Hybrid CDN-P2P: Why Limelight Needs Its
Own Red Swoosh,” NOSSDAV ’08: Proceedings of the 18th International Workshop on Network and Operating Systems Support for
Digital Audio and Video (2008): 75-80.
17 See Akamai NetSession Interface, accessed June 17, 2013, http://www.akamai.com/client.

Another emerging paradigm is the CDN federation, 18 i.e. the interconnection of different and
heterogeneous CDNs to obtain a greater worldwide presence assuring a predefined Service Level
Agreement (SLA).

Delivering Content

Technically, a CDN is formed by a content server network and a specialized Domain Name Server
(DNS) network that redirects end users to the most appropriate content server based on a
sophisticated algorithm. The basic mechanism of a CDN consists of the following steps (see Figure
5 below):

• An end user queries its local DNS (LDNS) to resolve the IP address of a web
resource (e.g. http://images.example.com).
• The LDNS connects to the authoritative DNS server of “example.com”
which returns the canonical name (CNAME) “server1.cdn.com” in response.
• The LDNS connects to the authoritative DNS server of “server1.cdn.com”
which finally returns the IP address (usually, servers return two IPs to allow
client-side load balancing) of the most convenient CDN content cache server
that hosts the example.com content.

Figure 5: CDN DNS Redirection.
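The redirection chain of Figure 5 can be observed from any client by following the CNAME records that the resolver traverses. Below is a minimal sketch using only Python’s standard library; the hostname is a placeholder rather than one of the sites analyzed in this article. socket.gethostbyname_ex() returns the canonical name that is finally answered, the alias names followed on the way, and the resolved IP addresses.

import socket

def resolve_with_aliases(hostname: str) -> None:
    # A resource served through a CDN typically resolves via one or more
    # CNAME aliases to a canonical name inside the CDN provider's domain.
    canonical, aliases, addresses = socket.gethostbyname_ex(hostname)
    print("queried name :", hostname)
    print("aliases      :", aliases)      # names traversed via CNAME records
    print("canonical    :", canonical)    # final name answered by the CDN's DNS
    print("IP addresses :", addresses)    # often two or more, for load balancing

# Placeholder hostname; substitute any resource host found in a page's HTML.
resolve_with_aliases("images.example.com")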

18 Dan Rayburn, “EdgeCast Says CDN Federation Taking Hold, Details How Operators Are Exchanging Traffic,”
Streaming Media Blog, accessed June 17, 2013,
http://blog.streamingmedia.com/the_business_of_online_vi/2012/05/edgecast-says-cdn-federation-taking-hold-
details-how-operators-are-exchanging-traffic.html. See also EdgeCast, “Open CDN,” accessed June 17, 2013,
http://www.edgecast.com/solutions/opencdn/.

Basically, most CDNs follow the delivery procedure illustrated in Figure 5 above. However, each
CDN adds its own mechanisms to improve the performance of the content delivery. Some of these
mechanisms are based on deploying the content and DNS servers at strategic locations,
implementing sophisticated cache algorithms at the content servers or implementing advanced load
balancing algorithms and monitoring tools to redirect users to the most appropriate content server.

Table 1 below shows a summary of the different content delivery strategies and their advantages.
Content providers wanting to publish non-multimedia content will choose a traditional hosting
service as it is the cheaper connectivity solution and covers the minimum user requirements
sufficiently. Content providers that want to publish real-time resources or large amounts of data
(photos, videos, audio files, etc.), or simply want to offer a better QoE to the end user, will choose a
CDN service. The performance of a CDN inside an ISP or located at a vantage point is quite similar
although the former is slightly better. The content provider must evaluate if it is necessary to pay
more for an extra level of quality. Therefore, using a CDN solution with cache servers inside the ISP
is a good choice for content providers who believe that the quality of the services provided to their
customers (e.g. minimizing delay) is crucial for their value chain.

Table 1: Summary of the Different Content Delivery Strategies.

Content delivery solution: CDN inside an ISP
  Quality of Experience: extremely high
  Network infrastructure costs: very high
  Occupation of backbone resources: low (if content is highly requested)
  Network management (number of DNS and cache servers): very high
  Applications: VoD and video streaming, photos, and big files
  Target market: big CPs

Content delivery solution: CDN located in a vantage point at one hop
  Quality of Experience: very high (depending on the number of PoPs)
  Network infrastructure costs: high
  Occupation of backbone resources: medium (one flow per user request from the vantage point)
  Network management (number of DNS and cache servers): high
  Applications: VoD and video streaming, photos, and big files
  Target market: big CPs and any small or medium CP whose business model is focused on delivering some type of QoE-sensitive content

Content delivery solution: Traditional hosting
  Quality of Experience: medium or low
  Network infrastructure costs: medium or low
  Occupation of backbone resources: high (one flow per user request)
  Network management (number of DNS and cache servers): medium or low
  Applications: web hosting
  Target market: small and medium CPs

CREATING A MODEL OF INTERCONNECTION


The first contribution of this article is to provide a new approach for generating Internet graphs in
order to reveal the connectivity level of the current Internet players based on their interconnection
relationships. To this end, this section presents a model for proving the existence of a mesh Internet
topology.

There are many possibilities for performing this analysis. One approach could consist of sending a
survey to the different ISPs and content providers asking them to make their interconnection
agreements public. However, the operators will probably not provide this information because it is
usually considered confidential. A second approach could be to perform different traceroute tests
from multiple locations and to count the number of ISPs between end users and content providers.
This could be a tedious task due to the necessity of implementing a refined process to count the
number of ISP hops in each measurement. In addition, traceroute is sometimes filtered by routers
and firewalls, so many measurements would be inaccurate. A third option could be to model
an Internet topology for a geographic area based on the information of the management messages
sent between ASes. This third option was chosen for this study because of the availability of public
data for performing this analysis and the ease of building a model. Furthermore, as will be shown
later in the Results section, this approach is a good approximation of measuring distance between
ASes.

To create the Internet topology, this study proposes a novel approach that consists of a simulator
loaded with ISP information extracted from the CAIDA database. 19 CAIDA provides a database
with interconnection information about hundreds of ASes. We use CAIDA because this database
offers information about the interconnection relationships between AS neighbors based on the
analysis of BGP messages.

Figure 6: Recreation of a Partial Internet Graph.

19 CAIDA (The Cooperative Association for Internet Data Analysis) is a collaborative undertaking among organizations
in the commercial, government, and research sectors aimed at promoting greater cooperation in the engineering and
maintenance of a robust, scalable global Internet infrastructure. This analysis uses database information from Jan. 10,
2012. See Cooperative Association for Internet Data Analysis, “AS Rank: AS Ranking,” accessed June 17, 2013,
http://as-rank.caida.org/.

The simulator generates an AS topology that shows the different interconnections between the
participant ISPs (see Figure 6 above). In order to generate the topology for a specific region, we first
loaded the simulator with a selection of 43 websites available in Spain. 23 These websites are hosted in
different ISPs that have different AS numbers. The selection consists of 21 sites from “the most
visited websites rank” created by Google 24 and a random selection of 22 long tail websites. Second,
we loaded the simulator with the AS numbers of Spain’s largest Access ISPs. 25 Third, we used the
CAIDA database to obtain the AS neighbors of each AS. Finally, the simulator processed the
routing information of each AS and obtained the number of hops between the different ISPs.

Figure 7: Methodology to Create the Network Map.

Although the simulator is loaded with the AS information from CAIDA, it needs to generate the
route paths using a routing algorithm. To this end, the simulator uses the AS numbers of the hosting
and Access ISPs and their neighbors, and generates a topology map applying the Dijkstra routing
algorithm (see Figure 7 above). The simulator uses the Dijkstra shortest path algorithm because it
cannot model the unknown operational factors of BGP (e.g. ISP routing policies) and because the

23 We use a finite number of websites because it would be unfeasible to model the entire Internet and because we
believe that these 43 sites are sufficiently representative to meet our objectives.
24 Google provides a service that ranks websites for customers who use its Ad Planner service. See Google, “Ad

Planner,” accessed June 17, 2013, http://www.google.com/adplanner.


25 Information on Spain’s Access ISPs was obtained from the Spanish national regulatory authority (CMT). See

CMTData, “Datos del Sector,” accessed June 17, 2013, http://cmtdata.cmt.es.



former offers a simpler approximation of the Internet topology based on the number of hops. This
could be a limitation because the model does not take into consideration features like traffic routing
policies or new AS relationships. However, this study expects to obtain a good approximation
between the model and the real Internet because inter-AS routing seeks to provide the shortest path
to reach a destination. 28
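A minimal sketch of this step is shown below. It assumes the CAIDA neighbour information has already been flattened into undirected AS adjacency pairs; the AS numbers and links are placeholders, not the dataset used in the study. Because every inter-AS link counts as a single hop, Dijkstra with unit edge costs behaves exactly like a breadth-first search, and the hop matrix of Table 2 and the averages of Table 3 follow directly from the computed distances.

import heapq
from collections import defaultdict

def load_as_graph(neighbour_pairs):
    # Build an undirected adjacency map from (AS, AS) neighbour pairs.
    graph = defaultdict(set)
    for a, b in neighbour_pairs:
        graph[a].add(b)
        graph[b].add(a)
    return graph

def as_hops(graph, source):
    # Dijkstra with unit edge costs: number of AS hops from `source` to every AS.
    dist = {source: 0}
    queue = [(0, source)]
    while queue:
        d, node = heapq.heappop(queue)
        if d > dist.get(node, float("inf")):
            continue
        for neighbour in graph[node]:
            if d + 1 < dist.get(neighbour, float("inf")):
                dist[neighbour] = d + 1
                heapq.heappush(queue, (d + 1, neighbour))
    return dist

# Hypothetical adjacencies standing in for the CAIDA neighbour data.
edges = [(3352, 12715), (3352, 1299), (1299, 15169), (12715, 20940), (1299, 20940)]
graph = load_as_graph(edges)
hosting_ases = [15169, 20940]    # example content/CDN ASes
for access_as in (3352, 12715):  # example Access ISP ASes
    dist = as_hops(graph, access_as)
    hops = [dist[h] for h in hosting_ases if h in dist]
    print(access_as, {h: dist.get(h) for h in hosting_ases},
          "average:", round(sum(hops) / len(hops), 2))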

Table 2 below shows the structure of the hop matrix between different ASes generated after
applying the Dijkstra algorithm. Table 3 below shows an example of the information that can be
extracted from the hop matrix. The aim of this latter table is to present the number of hops between
Spain’s Access ISPs and the selected content provider ISPs.

Table 2: Hops between ASes.

Hops between Autonomous Systems

        AS_1   AS_2   ...   AS_n
AS_1    –      2      ...   2
AS_2    2      –      ...   1
...     ...    ...    ...   ...
AS_n    2      1      ...   –

Table 3: Average Hops between Access ISPs and CP ISPs.

Access ISP   CP ISP   Hops   Average hops
AS_1         AS_5     2
             AS_19    1      2
             AS_34    3
AS_2         AS_5     1
             AS_19    1      1.33
             AS_34    2
...          ...      ...    ...

To validate the accuracy of the model, we compare the Dijkstra algorithm with traceroute
measurements. The traceroute test is executed from Spain’s five largest Access ISPs and it provides
the number of AS hops to reach the same selection of websites as the model.
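A minimal sketch of how this validation can be automated is shown below. It runs traceroute towards a website, extracts the responding hop addresses, maps each to its origin AS, and counts the distinct ASes crossed. The IP-to-ASN lookup is a placeholder stub: the article does not say which IP-to-AS database was used, so the small dictionary below only stands in for such a service.

import re
import subprocess

# Placeholder lookup standing in for a real IP-to-ASN database or whois query;
# the addresses and AS numbers here are illustrative only.
DEMO_IP_TO_ASN = {"203.0.113.1": 3352, "198.51.100.7": 1299, "192.0.2.10": 20940}

def ip_to_asn(ip):
    return DEMO_IP_TO_ASN.get(ip)

def traceroute_as_hops(destination: str) -> int:
    # Run traceroute with numeric output and pick the first IP on each hop line.
    result = subprocess.run(["traceroute", "-n", destination],
                            capture_output=True, text=True, timeout=120)
    hop_ips = re.findall(r"^\s*\d+\s+(\d+\.\d+\.\d+\.\d+)", result.stdout, re.MULTILINE)
    ases = {ip_to_asn(ip) for ip in hop_ips if ip_to_asn(ip) is not None}
    # AS hops = number of distinct ASes on the path minus the source AS.
    return max(len(ases) - 1, 0)

print(traceroute_as_hops("www.example.com"))  # placeholder destination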

Results

To demonstrate the model’s accuracy, we used information from different ISPs and Internet content
providers operating in Spain. We anticipate that the results shown in this section can be extended to

28 In backbone environments, with bandwidths in the gigabit-per-second range, the number of hops is critical for
offering the best QoE.

other geographic areas with similar conclusions, because Internet actors are organized similarly in
most European countries.

This section shows the results obtained after running the Dijkstra model and analyzing in-depth the
interconnection topology of Spain’s Internet players. The developed model generates a network
topology and then displays the average hop distribution among all the ISPs considered. Table 4
below shows the average number of hops between Spain’s Access ISPs and the analyzed websites,
which is on average 2.046 hops (weighted average).

Table 4: Access ISP Hops Distribution.

Access ISP          Average number of hops to reach the selected websites
Telefónica          2.024
ONO                 1.952
Orange              2.095
Jazztel             1.976
Vodafone            2.390
Weighted average    2.046

Spanish operators have a similar average number of hops, and the obtained figures confirm that
websites are relatively close to the end user. This low number of hops reveals that the current
Internet is highly interconnected and tends to form a mesh. In addition, Telefónica, Orange
(France Telecom), and Vodafone would have obtained better performance if the model were able to
create default routes through their own backbone networks, which are highly interconnected with
the rest of the carrier operators. These large operators have sibling-to-sibling interconnection
relationships between their access and backbone networks that the model does not consider as one
AS hop because they are from the same corporation.

Figure 8 below illustrates the residential market shares of the Access ISPs studied.

Figure 8: Access ISP Residential Market Share in Spain. 29

Table 5 below shows the average number of hops between the hosting ISPs (from our selection of
websites) and Spain’s Access ISPs. In this case, the overall average is exactly the same as in Table 4,
because the table presents the same results from a hosting ISP point of view. Table 4 and Table 5
show that end users are on average located two hops away from web
content. A remarkable aspect of Table 5 is the set of ISPs that top the list. On one hand, top
positions are occupied by Tier 1 ISPs (Telefónica, Level3, Cogent, Colt, and BT) and by “hyper-
giant” international content providers (Microsoft, Yahoo, Google, Amazon, and Facebook). On the
other hand, the rest of the content providers and the long tail content providers use hosting
solutions that are also located in vantage points not too far from the end user.

Table 5: Average Hops to Reach an Access ISP.

Content ISP          Average number of hops to reach an Access ISP
Telefónica           1.062
Easynet              1.265
Level3               1.554
Tuenti               1.616
Cogent               1.779
Prisa                1.851
Colt                 1.860
BT                   1.959
Facebook             2
Google               2
Microsoft            2
OneAndOne            2
OVH                  2
Peer 1               2
SoftLayer            2
Yahoo                2
Arsys                2.085
Unidad Editorial     2.140
Dinahosting          2.514
Softonic             2.554
Twitter              2.554
IDH                  2.640
Wikimedia            2.874
IAC                  3
Weighted average (considering that some ISPs host more than one site): 2.046

29 Data compiled from Kingdom of Spain, Comisión del Mercado de las Telecomunicaciones, “Informe geográfico,”
white paper, June 2012, accessed July 3, 2013, http://www.cmt.es/c/document_library/get_file?uuid=4f2cc43d-69b6-
4c7c-8f4e-a328ab06e869&groupId=10138, 2-12 (2011 data).

As mentioned previously, we validate the model’s performance by executing a traceroute test from the
five largest Access ISPs in Spain (Telefónica, ONO, Orange, Jazztel, and Vodafone). The traceroute
test counts the number of AS hops between the Access ISP and the selected websites. The objective
of this task is to compare the number of hops obtained using the model with the obtained number
of hops after executing the traceroute test in a real environment.

Table 6 below shows that the proposed model obtains fairly similar results compared to the
traceroute test. The model obtains a cosine similarity coefficient close to one, which validates the
accuracy (goodness of fit) of the model for generating Internet topologies. The hop count difference
between the model and the traceroute test exists because real routing does not always rely on the
shortest path to reach destinations. The traceroute paths follow actual BGP routing, which applies predefined policies like
prioritizing IXP routes (in the case of Jazztel and Vodafone) or sibling-to-sibling interconnections
(in the case of Telefónica, Orange, and Vodafone). In addition, BGP can configure multiple paths to
the same destinations, which is something that our model does not allow.

Table 6: Comparison of Average Hops Using Traceroute and the Dijkstra Model.

                               Telefónica   ONO      Orange   Jazztel   Vodafone
Traceroute                     2.357        2.024    2.476    2.071     2.857
Dijkstra model                 2.024        1.952    2.095    1.976     2.404
Hop count difference           0.3          0.1      0.4      0.1       0.5
Cosine similarity coefficient  0.967        0.986    0.977    0.961     0.949
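For reference, the similarity coefficient reported in Table 6 is the standard cosine similarity between two hop-count vectors over the same set of websites, one obtained from the model and one from traceroute. The per-website vectors are not reproduced in the article, so the values below are placeholders that only illustrate the computation.

import math

def cosine_similarity(u, v):
    # Cosine similarity between two equal-length hop-count vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# Hypothetical per-website hop counts for one Access ISP (not the study's data).
model_hops = [2, 2, 1, 3, 2, 2]
traceroute_hops = [2, 3, 1, 3, 2, 3]
print(round(cosine_similarity(model_hops, traceroute_hops), 3))  # ~0.98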

Despite the differences, we consider that the proposed model is a good tool for generating Internet
topologies and for visualizing the high interconnection level in the current Internet ecosystem.

INSPECTING THE ANATOMY OF CONTENT DELIVERY


The previous section confirmed a high degree of connectivity between ISPs. This means that almost
all content is close to the end user, in terms of the number of hops. However, this fact does not
confirm the presence of CDNs. To confirm their existence, in this section the article makes its
second contribution by analyzing in-depth where exactly the content of a website is located and
identifying which content delivery solution each site is using.

It is commonly known that most of the large Internet companies use some kind of CDN solution
for distributing their Internet content. However, this section aims to locate exactly where these
CDNs are and to identify which strategies the companies are using. CDNs were designed to
transport and cache large amounts of Internet content, such as HTML code, JavaScript, large files,
images, audio, and video. The most appropriate way to identify whether a website is using a CDN
service is to inspect its HTML code looking for URLs linked to CDN providers. The redirection of
a link to an external hosting company will be clear evidence that a particular website is using a CDN
provider.

To perform this task, we carefully analyzed the 43 websites 30 selected in the previous section (see
Table 7 below) and inspected their HTML code looking for image URLs. Then, we input these
URLs in the nslookup tool to obtain the IP addresses of the servers hosting the images.
Additionally, nslookup returns CNAMEs for those URL aliases of the websites that use CDN
solutions. Finally, each IP address was searched in a database to identify its owner ISP and, in
combination with the interpretation of the CNAME, we determined the type of content delivery
solution used by the different sites.
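A minimal sketch of this inspection pipeline, using only the Python standard library, is shown below. The page URL is a placeholder, the regular expression only catches absolute image URLs, and the final step of mapping each returned IP address to its owner ISP is omitted because the article does not name the database that was queried.

import re
import socket
import urllib.request
from urllib.parse import urlparse

def image_hosts(page_url: str):
    # Fetch the page and return the distinct hostnames of its <img> sources.
    html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
    srcs = re.findall(r'<img[^>]+src=["\'](https?://[^"\']+)["\']', html, re.IGNORECASE)
    return {urlparse(src).hostname for src in srcs if urlparse(src).hostname}

def resolve_like_nslookup(hostname: str):
    # Canonical name, CNAME aliases, and IP addresses, as nslookup would report.
    # A canonical name in a CDN's domain is the redirection evidence used here;
    # mapping the IPs to their owner AS/ISP requires an external database.
    return socket.gethostbyname_ex(hostname)

for host in sorted(image_hosts("https://www.example.com/")):  # placeholder site
    print(host, resolve_like_nslookup(host))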

Table 7: Website Selection.

Type of site: Large Content Provider sites
  google.com, youtube.com, facebook.com, live.com, msn.com, yahoo.com, tuenti.com, wikipedia.org,
  marca.com, microsoft.com, as.com, elmundo.es, ask.com, eltiempo.es, elpais.com, lacaixa.es,
  wordpress.com, rtve.es, softonic.com, twitter.com, bing.com

Type of site: Long Tail sites
  chipspain.com, onabarcelona.com, ovellanegra.com, lacasadelosdisfraces.es, fibratel.es,
  www.planafabrega.com, km77.com, hotelmajestic.es, enriquetomas.com, viladepiera.cat, autoprint.es,
  labraseria.es, floristeriasnavarro.com, restaurantegorria.com, copiservei.com, inoutalberg.com,
  corchosgomez.com, barcelonavinos.es, dondisfraz.com, forocoches.com, www.condis.es,
  hoteles-catalonia.com

One conclusion after analyzing the selection of websites is that top-ranked sites tend to use CDN
solutions while less-visited sites (usually classified as long tail sites) use traditional Internet hosting.
The selection of one content delivery solution or another depends on the website’s requirements.

Table 8 below shows a classification of the content delivery services used by the analyzed sites, in
which a wide variety of solutions can be seen. In the following list we describe each type of delivery
service: 31

• Hosting: Operators whose business models are focused on hosting content in data centers
located in vantage points.

30 This study analyzes a limited number of websites. However, this sample is considered representative because the large
content providers selected represent most of the Internet traffic (for example, according to the 2013 Sandvine report,
YouTube accounts for nearly 20% of total downstream traffic). Therefore, with only a few websites it is possible to create an
accurate classification of the different CDN types. See Sandvine, “Global Internet Phenomena Report,” report (2013),
accessed June 19, 2013,
http://www.sandvine.com/downloads/documents/Phenomena_1H_2013/Sandvine_Global_Internet_Phenomena_Re
port_1H_2013.pdf.
31 Adapted from Body of European Regulators for Electronic Communications.

• Pure CDN: Network operators whose business models are focused on delivering content
through their cache servers and renting link capacity to connect directly to the vast majority of
Access ISPs.
• Tier 1 CDN: Tier 1 operators that are capable of offering CDN services
using their network infrastructures and deploying cache servers at vantage
points.
• Telco: Access operators that are capable of offering CDN or hosting
services.
• CP: Large content providers that deliver their content using their own
network infrastructures.

Table 8: Content Delivery Classification.

Type of delivery service   ISPs
Hosting                    Acens, Arsys, IDH Telvent, Dinahosting, Easynet, OneAndOne, OVH, SoftLayer
Pure CDN                   Akamai, Edgecast, Amazon Cloudfront
Tier 1 CDN                 Level3, NTT, Telia, Cogent
Telco                      Telefónica, Colt, BT, Digiweb
CP                         Google, Microsoft, Yahoo, El Pais, Softonic

According to our dataset of websites, large content providers use pure CDNs, Tier
1 CDNs, and their own CDN solutions in roughly equal proportions, as shown in Figure 9[a] below. On
the other hand, long tail sites prefer hosting services as a first option and telco solutions as a second
option, as shown in the pie chart in Figure 9[b]. The pie chart in Figure 9[c] shows that for the 43
analyzed websites the content delivery marketplace is quite diverse. Nevertheless, this last diagram
does not take into account that a pure or Tier 1 CDN will offer a better user experience although at
a higher economic cost than a hosting solution, and that the amount of Internet traffic delivered by
large content providers is on a different scale compared with the amount delivered by the long tail
content providers. Therefore, each site must select the best solution based on its requirements.

Figure 9: Content Delivery Solutions.

Table 9 below examines the case of Akamai in detail, based on the analysis of the IP addresses
returned by the DNS servers of different Access ISPs. We can observe that almost all large Access
ISPs in Spain have Akamai cache servers inside their premises or at a distance of one hop. Akamai is
the world’s leading CDN, present in more than 1900 networks in 70 countries, serving nearly 30%
of global Internet traffic, 32 making it the prime example of this CDN paradigm. Other ISPs like
Level3 or NTT have located their servers in data centers close to the Access ISPs. Google, the
largest content provider in the world, which is estimated to contribute between 6% and 10% of
global Internet traffic, 33 also uses vantage points to host its servers as close as possible to the Access
ISPs.

Table 9: Access ISPs Using CDNs in Their Premises.

Access ISP: Telefónica
  CDN strategy: Akamai inside the ISP
  Returned IP(s): 194.224.66.42, 194.224.66.19
  Details of the AS: AS6813 Flexnet Com. Int. de Telefónica

Access ISP: ONO
  CDN strategy: Akamai at one hop through Telia
  Returned IP(s): 213.248.113.32, 213.248.113.24
  Details of the AS: AS1299 Telia Net

Access ISP: Jazztel
  CDN strategy: Akamai inside the ISP
  Returned IP(s): 212.106.219.176, 212.106.219.130
  Details of the AS: AS12715 Jazz Telecom Global Spanish ISP

Access ISP: Orange
  CDN strategy: Akamai inside the ISP
  Returned IP(s): 90.84.53.16, 90.84.53.59, 90.84.53.64, 90.84.53.81, 90.84.53.9
  Details of the AS: AS5511 Opentransit France Telecom – Orange IP Backbone

Access ISP: Vodafone
  CDN strategy: Akamai at one hop through Espanix IXP
  Returned IP(s): 92.123.73.59, 92.123.73.81
  Details of the AS: AS20940 AKAMAI-ASN1 Akamai Technologies European AS
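The per-ISP answers summarized in Table 9 come from issuing the same query against the DNS resolvers of the different Access ISPs, since a CDN’s authoritative DNS tailors its answer to the resolver that asks. A minimal sketch that shells out to nslookup is shown below; the resolver addresses and hostname are placeholders, not the ones measured in the study.

import subprocess

# Placeholder resolver addresses standing in for the DNS servers of different
# Access ISPs; the queried hostname is also illustrative.
RESOLVERS = {"ISP-A": "203.0.113.53", "ISP-B": "198.51.100.53"}

def query_via(resolver_ip: str, hostname: str) -> str:
    # `nslookup hostname resolver` asks the given resolver instead of the default one.
    result = subprocess.run(["nslookup", hostname, resolver_ip],
                            capture_output=True, text=True, timeout=10)
    return result.stdout

for isp, resolver in RESOLVERS.items():
    print(f"--- answer seen from {isp} ({resolver}) ---")
    print(query_via(resolver, "images.example.com"))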

32 Akamai Technologies, “Akamai and Riverbed to Accelerate Applications over Hybrid Cloud Networks,” press release,

May 10, 2011, accessed June 19, 2013, http://www.akamai.com/html/about/press/releases/2011/press_051011.html.


33 Craig Labovitz, “How Big is Google?” ArborSert, Mar. 16, 2010, accessed June 19, 2013,

http://ddos.arbornetworks.com/2010/03/how-big-is-google/.

Another interesting aspect of this analysis is that Tier 1 ISPs are competing in two markets at once.
They continue offering Tier 1 transit services to interconnect ISPs and at the same time they use
their own networks to offer CDN services to content providers. Moreover, we can observe that
there are some Access ISPs with international backbones that compete in both transit and CDN
businesses and also offer Internet connectivity to residential users.

CONTENT DELIVERY IMPLICATIONS


ISPs are focused on offering new delivery services because (1) they need to satisfy the content
providers’ demand for faster connectivity solutions; and (2) they need to compensate for the transit
services’ decrease in profitability. 34 ISPs can offer faster solutions in many ways. One of these ways
is to apply QoS policies to prioritize the IP packets from the content provider. As an alternative,
ISPs can deploy content cache servers inside their networks.

Applying QoS mechanisms could be seen as the natural strategy for ISPs to accelerate some content
traffic over the rest, as they have already deployed mature technologies such as MPLS. 35 The large-
scale deployment of these mechanisms would be more a problem of coordination between operators
than a technological issue. However, prioritizing some content could degrade other traffic flows and
thus violate the non-discrimination principle of the Internet. The implicit degradation in the
transmission quality of one type of content in comparison to other types may be seen as an anti-
network neutrality practice. In this context, regulators like the FCC in the United States 36 and BEREC in
Europe 37 are taking action on the matter.

In contrast, CDNs accelerate the delivery of content without directly degrading the rest of the IP
traffic. In the previous section we observed that large content providers use CDN solutions to
deliver their services. The use of this strategy has a huge impact on the core of the Internet, as most
of the information is bypassed to the Access ISP. According to the most recent Sandvine report,
YouTube as distributed by the Google CDN and Netflix as distributed by different CDNs (Akamai,
Level3, LimeLight, and its own solution) represent 50% of total Internet traffic. 38 Such an amount
of traffic “magically” bypasses the core of the Internet, relieving congestion in the network.
This also has other implications as it breaks up the traditional tiered hierarchy described in the
second section of this article, and leads us to a mesh model in which all ISPs want to increase their
interconnections.

34 William B. Norton, “Internet Transit Prices – Historical and Projected,” report, DRPeering International, Aug. 2010,
accessed June 19, 2013, http://drpeering.net/white-papers/Internet-Transit-Pricing-Historical-And-Projected.php.
35 MPLS is Multiprotocol Label Switching, a network technology that offers the possibility of applying QoS and traffic

engineering techniques.
36 United States, Federal Communications Commission, Preserving the Open Internet; Broadband Industry Practices, Report and

Order, GN Docket No. 09-191/WC Docket No. 07-52, Dec. 23, 2010, accessed June 19, 2013,
http://hraunfoss.fcc.gov/edocs_public/attachmatch/FCC-10-201A1.pdf.
37 Body of European Regulators for Electronic Communications.
38 Sandvine.

It is important to note that both strategies, applying traffic prioritization and deploying CDNs, have
the same main goal in common: accelerating the delivery of specific content. Large content
providers want to accelerate their content delivery and it does not matter whether this is done using
prioritization or CDNs. Also, from the user’s perspective no explicit difference may be observed
because users perceive similar QoE. The interesting part is how different regulators are dealing with
the different strategies. QoS mechanisms, which are the natural and possibly the more cost-effective
option for ISPs, are considered an anti-network neutrality practice. In contrast, CDNs are not being
considered as violating network neutrality principles, although they offer “faster lanes” for those
content providers who can afford it, possibly also leading to a two-class (or more) Internet. In this
context, one can argue that CDNs are not degrading the rest of the traffic, but how can a long tail
video website compete against a “hyper-giant” whose content is distributed using high speed
connections?

Preferential interconnection agreements between large content providers and ISPs
could affect competition in the rest of the Internet ecosystem.
market power to hire content-accelerating services would obtain a better user perception, which can
translate into higher popularity. End users would only access video websites from dominant content
providers as the rest of video sites would offer a poor user experience. This scenario would have the
consequence that long tail websites could hardly compete as they would not be able to pay for
prioritized traffic or CDN services. 39 Consequently, the Internet could be dominated by a few hyper-
giant content providers that would aggregate most of the Internet’s content (an oligopoly), in
alliance with ISPs and CDNs that only provide fast pipes to those who are willing to pay for them.
In this situation, small websites would be forced to delegate part of their core business (e.g. the
marketing channel) in return for being aggregated into the ecosystem of the hyper-giants. 40

Thus, we encourage policymakers to address how CDNs affect network neutrality because their
effects on the Internet ecosystem are potentially the same as those of traffic prioritization.
Both strategies offer similar user perceptions in which some content is delivered with better quality
than others, and where one must pay for this increase in quality. Therefore, we conclude that the
debate over network neutrality should also include CDNs.

Becoming Part of the CDN

CDNs have also had a big impact on the interconnection ecosystem because they have changed the
way ISPs offer connectivity solutions. CDNs have generated new business opportunities in response
to the price reduction in transit costs. The previous section showed that heterogeneous content
providers can choose between different content distribution solutions, confirming that the supply of
services is wide and that operators are investing in this type of network solution.

39 Luke Collins, “In Neutral,” Engineering and Technology 5, no. 11 (July 24, 2010): 60-62.
40 This is similar to the apps ecosystem in which two platforms, Android and Apple, control the distribution of
applications through their exclusive online stores.

Hence, this study confirms what Wulf et al. previously observed 41 and reveals that many carriers like
Level3, NTT, Telia, or AT&T have deployed, acquired, or resold CDN solutions to compete in this
emerging market. They are transforming their businesses and compensating for the price reduction
of transit services by offering CDN services, whose prices are considerably higher than those of pure transit
services. With this strategy, carriers are taking advantage of their existing networks and are deploying
CDNs to obtain higher revenues. Figure 10 below illustrates the carrier strategy for entering into the
CDN market.

Figure 10: ISP Strategy to Compete in the CDN Marketplace.

Moreover, the entry of ISPs into the CDN marketplace may foster the deployment of new
enhanced services such as cloud computing or advanced security applications for both residential
and enterprise markets. Therefore, CDNs are more than a means of accelerating or caching content; they
are a new business opportunity that will open new doors to innovation and to the commercialization
of new services.

Spanish ISPs are aware of this tendency, but only a few of them are offering CDN solutions. Only
Telefónica and Orange are explicitly offering CDN services, though with different strategies.
Telefónica has recently decided to deploy its own CDN while Orange has chosen to resell Akamai
services. The rest of the operators are probably already assessing the possibility of offering CDN
services and in the coming months there will possibly be new announcements. Table 10 below
shows the current status of CDN solutions offered by the largest Spanish operators. Nevertheless,

41 Jochen Wulf, Rüdiger Zarnekow, Thorsten Hau, and Walter Brenner, “Carrier Activities in the CDN Market – An
Exploratory Analysis and Strategic Implications,” Proceedings of the 2010 14th International Conference on Intelligence in Next
Generation Networks (2010): Section 2A.

all these operators are connected to Espanix, Spain’s Internet eXchange Point (IXP), 42 which means
that those who do not yet have a CDN solution are in an optimal position to resell a third-party
CDN service.

Table 10: CDN Strategies of Spain’s ISPs.

Carrier: Telefónica
  CDN / Hosting Provider: Telefónica
  Services: Web optimization, video services, smart download, private delivery
  Type of Cooperation: In-house development
  Source: http://www.telefonica.com/cdn/en/

Carrier: ONO
  CDN / Hosting Provider: ONO
  Services: Traditional hosting
  Type of Cooperation: In-house development
  Source: http://www.ono.es/empresas/productos/internet/alojamiento/

Carrier: Orange
  CDN / Hosting Provider: Akamai
  Services: Web optimization, video services, smart download, private delivery
  Type of Cooperation: Reselling
  Source: http://www.akamai.com/html/about/press/releases/2012/press_112012_1.html

Carrier: Vodafone
  CDN / Hosting Provider: Vodafone
  Services: Cloud services and traditional hosting
  Type of Cooperation: In-house development
  Source: http://www.vodafone.es/empresas/es/soluciones-unificadas/servicios-en-la-nube/hosting/

Carrier: Jazztel
  CDN / Hosting Provider: Jazztel
  Services: Traditional housing
  Type of Cooperation: In-house development
  Source: http://empresas.wholesale.jazztel.com/servicios/wholesale

CONCLUSIONS
This article conducted an empirical analysis of the impact of CDNs on Spain’s Internet ecosystem.
The article aims to answer the following questions: (1) What is the distance, in terms of operator
hops, between content providers and Access ISPs? and (2) What are the content delivery strategies of
large and long tail content providers? Regarding the first question, this article contributes a novel
methodology for modeling the Internet’s topology based on the Dijkstra routing algorithm. The
model’s performance is validated using traceroute measurements with a satisfactory result. The study
reveals that web content is on average two hops away from the end user. This distance is relatively
small, but the QoE perceived by end users could be completely different depending on the type of
content delivery solution. Answering the second question, the study identifies the different content
distribution solutions that the various content providers are using, with a focus on the use of CDNs.
The study analyzes the URLs of different image resources in a set of websites, and the nslookup tool
confirms network redirections pointing to CDNs. The results obtained from the study reveal that
big content providers tend to use some type of CDN solution to spread their content over the
Internet while websites from the long tail use traditional hosting.

42 See Espanix, Peering entre miembros, accessed June 19, 2013, http://www.espanix.net/esp/peering.htm.

Among the different CDN solutions, two types of architecture can be distinguished. The first type
of CDN solution (mainly used by Akamai) deploys its cache servers inside the different target ISPs.
The second type (Level3, NTT, Cogent, etc.) deploys its cache servers in vantage points very well-
connected (at only one hop) to the target ISPs. The first type obtains better performance in terms
of latency and throughput at the expense of a greater investment in servers and their
management, whereas the second type obtains slightly worse performance but with less
maintenance and a smaller number of servers, while offering a more competitive price.

The content delivery sector is an expanding market with a high level of competition, as can be
observed by the heterogeneity of content distribution solutions used by content providers. This
study finds that content providers use many different content delivery solutions and provides a
classification depending on the nature of the solution. The study identifies content delivery solutions
from pure CDNs, from Tier 1 ISPs, from access ISPs, from large content providers, and from
traditional hosting ISPs. Some of the conclusions after the analysis are that traditional Tier 1 ISPs
have evolved from simple carriers and now offer content delivery services close to the end user.
Access ISPs have also started to deploy CDNs to distribute their own and external services, and at
the same time they continue offering classic hosting solutions. Pure CDN ISPs have emerged and
have deployed their data centers and networks at vantage points close to the end user. Large content
providers are also involved in the CDN business because they have found that offering an optimum
QoE is the key to success. Finally, long tail content providers choose hosting solutions from third
parties located in vantage points that also offer the possibility of upgrading their services through
cloud and CDN solutions.

Finally, this article discusses the impact of preferential interconnection agreements between large
content providers and ISPs, and the potential consequences for the rest of the Internet ecosystem.
The article suggests that policymakers should analyze whether CDNs have consequences for
network neutrality. The article also discusses how ISPs have begun to take part in the content
delivery business by deploying their own CDNs, acquiring existing CDNs, or reselling CDN services
from third parties.

Future Implications of CDNs in the New Internet Ecosystem

As this study states, large content providers commonly use CDNs in the distribution of their
content while long tail content providers tend to use traditional hosting solutions. By analyzing the
“hyper-giants” (Google, Yahoo, Microsoft, Amazon, etc.), one can observe that they are not only
content providers – they are basically content aggregators. This means that these hyper-giants are
attracting content from smaller sites or individuals and publishing it via their high-speed
infrastructures. In effect, a cannibalization process has begun in which the hyper-giants are
absorbing content from the long tail, entering fully into the niche of the traditional hosting
companies.

Taking as examples a user who wants to publish his own blog, or an SME (small and medium-size
enterprise) that wants to create its webpage, the hyper-giants provide many easy and specialized
solutions to cover these needs. In the first example, users can create their own blogs, using for
example Blogger from Google, and take advantage of the Google CDN and its excellent network
performance at zero cost. In the second example, an SME can create its own website using Google
Sites for plain HTML5 pages or Google App Engine and Amazon Web Services for more
sophisticated websites. An SME can thus publish its website for free until the site exceeds a traffic limit
or network resource quota, at which point the hyper-giants begin to charge the SME. Therefore, hyper-
giants offer free minimum services (or not-so-minimum, depending on the site’s requirements) that let
customers test their environment, and when the free quota is exceeded they begin to charge for the
consumed resources at competitive prices.

This new scenario has many advantages for end users and SMEs. They obtain free hosting and a free
subdomain (example.blogger.com, example.appspot.com, etc.), several enterprise tools (e-mail,
calendars, etc.), search engine optimization (SEO) capacities, and the high-speed connectivity and
high availability of a CDN. However, the hyper-giants offer these “free services” as closed products
and impose their operating methodology through their own Application Programming Interfaces
(API) and Content Management Systems (CMS). In addition, they do not usually offer technical
support, but instead provide FAQs and technical forums where specialists and current users share their
knowledge about specific issues. The reason for this approach is that their cloud environments are
completely ready to publish content and that user problems can be solved by consulting online
forums. Therefore, the degree of freedom of customization (web technologies, system performance,
etc.) is limited in comparison with hosting solutions, but these environments are constantly evolving,
offering more and better-adapted services every day.

So, what is the benefit for the hyper-giants? They offer these free services because they gain
potential users for their other services, whom they can require to use their integrated accounts (e.g.,
Google Accounts), and because these long tail sites can carry advertisements placed on behalf of
third parties, which is arguably the core business of the hyper-giants. Secondary motivations are that
the services generate revenue once sites exceed the free quotas, and that the hyper-giants can
improve their environments over their life cycle thanks to the contributions (suggestions, error
reports, etc.) of long tail users through Web 2.0 tools such as working groups, wikis, and forums.

Finally, many questions about the future of the Internet ecosystem remain open. Will content
providers tend to choose hyper-giant CDN solutions instead of traditional hosting? Will this lead to
an overlay content network formed by the hyper-giants’ CDN solutions, in which traditional hosting
ISPs are pushed out of the marketplace? Will hosting ISPs be converted into resellers of the hyper-
giants or of carrier ISPs? Could this emerging situation generate oligopolies that harm competition?

GLOSSARY

API: Application Programming Interface
AS: Autonomous System
BGP: Border Gateway Protocol
CDN: Content Distribution Network or Content Delivery Network
CMS: Content Management System
CNAME: Canonical Name
CP: Content Provider
DNS: Domain Name Server
FAQ: Frequently Asked Questions
HTML: HyperText Markup Language
IP: Internet Protocol
ISP: Internet Service Provider
IXP: Internet eXchange Point
LDNS: Local Domain Name Server
MPLS: Multi-Protocol Label Switching
POP: Point of Presence
QoE: Quality of Experience
QoS: Quality of Service
SEO: Search Engine Optimization
SLA: Service Level Agreement
SME: Small and Medium-size Enterprise
URL: Uniform Resource Locator


