Burkhard Stiller
Diploma Thesis
Live P2P Video Streaming Framework
Stefan Zehnder
Zürich, Switzerland
Student ID: 02-918-563
Communication Systems Group
Department of Informatics (IFI)
University of Zurich
Binzmuehlestrasse 14, CH-8050 Zurich, Switzerland
URL: http://www.csg.unizh.ch/
Live P2P Video Streaming Framework
Table of Contents
1 Introduction ............................................................................................................4
1.1 Introduction and Motivation ................................................................................................... 4
1.2 Description of Work and Goals ............................................................................................. 5
2 Related Work ..........................................................................................................6
2.1 Peer-to-peer Networks .......................................................................................................... 6
2.1.1 Pastry ...................................................................................................................... 6
2.2 Multiple Description Coding .................................................................................................. 7
2.2.1 Sub-Sampling Multiple Description Coding (SMDC) ............................................... 8
2.2.2 Multiple Description Motion Coding (MDMC) ........................................................... 8
2.2.3 Multiple State Video Coder (MSVC) ........................................................................ 9
2.2.4 Motion-Compensation Multiple Description Video Coding (MCMD) ........................ 9
2.2.5 Comparison of the MDC schemes ........................................................................... 9
2.2.6 Hierarchically Layer Encoded Video (HLEV) ......................................................... 10
2.3 Video Handling in Java ....................................................................................................... 10
2.3.1 Java Media Framework API (JMF) ........................................................................ 10
2.3.2 FOBS4JMF and JFFMPEG ................................................................................... 10
2.3.3 Freedom for Media in Java (FMJ) .......................................................................... 11
2.3.4 VideoLAN - VLC media player ............................................................................... 12
2.3.5 Summary ............................................................................................................... 12
2.4 P2P Video Streaming .......................................................................................................... 12
2.4.1 Resilient Peer-to-Peer Streaming .......................................................................... 13
2.4.2 Distributed Video Streaming with Forward Error Correction .................................. 13
2.4.3 SplitStream ............................................................................................................ 14
2.4.4 CoolStreaming/DONet ........................................................................................... 15
2.4.5 PULSE ................................................................................................................... 15
3 Live P2P Video Streaming Framework ..............................................................17
3.1 Envisioned Scenario ........................................................................................................... 17
3.2 System Components Overview ........................................................................................... 17
3.3 Design of the Components ................................................................................................. 19
3.3.1 Subcomponents of the P2P Manager .................................................................... 19
3.3.2 Subcomponents of the MDC Manager .................................................................. 21
3.4 Implementation of the Components .................................................................................... 24
3.4.1 Subcomponents of the P2P Manager .................................................................... 24
3.4.2 Subcomponents of the MDC Manager .................................................................. 26
3.5 Implemented MDC Scheme ................................................................................................ 28
3.6 Algorithm to Search for a possible Source .......................................................................... 29
3.7 Network Structures and Communication Aspects ............................................................... 30
3.7.1 Monitor the Status of Connected Peers ................................................................. 30
3.7.2 Peer disconnects the Stream ................................................................................. 31
3.7.3 Protocols ................................................................................................................ 32
4 Performance Evaluation .....................................................................................34
4.1 Framework Evaluation ........................................................................................................ 34
4.2 Problems ............................................................................................................................. 34
4.2.1 DHT - PAST ........................................................................................................... 34
4.2.2 MDC ....................................................................................................................... 35
4.2.3 Media Handling in Java ......................................................................................... 36
4.2.4 VLC - JVLC ............................................................................................................ 36
5 Future Work .........................................................................................................37
6 Acknowledgements .............................................................................................38
1 Introduction
The first chapter gives a short introduction to this work and a description of its goals.
Multiple Description Coding (MDC) splits a single video stream into several substreams (descriptions), which can be routed over several paths to the target. The more substreams a peer receives, the better the quality of the video stream. To play the stream, only one substream is needed. MDC therefore provides better failure resistance than a single video stream. A peer can always choose the number of substreams it wants to receive according to its available bandwidth.
2 Related Work
This chapter gives an overview of related work in the area of P2P video streaming and
multiple description coding, and an overview of the available libraries for media handling in
Java.
2.1.1 Pastry
Pastry is a peer-to-peer overlay network. The nodes in a Pastry network form a decentralized, self-organizing network that can be used to route messages from one node to another. FreePastry [20] is an open-source implementation of Pastry and is intended to serve as a platform for research and the development of P2P applications. Several applications have been built on top of FreePastry, for example a storage utility called Past [4] and a scalable publish/subscribe system called Scribe [5]; many others are under development.
Each Pastry node has a unique node-ID, a 128-bit numeric node identifier. All the nodes are arranged in a circular identifier space in which the node-ID determines the node's position.
Before joining a Pastry system, a node chooses a node-ID. The Pastry system allows
arbitrary node-IDs. Commonly, a node-ID is formed as the hash value of the node’s IP
address. Each node maintains a routing table, a neighborhood set and a leaf set.
The routing table is made up of several rows with multiple entries per row. On node n, the
entries in row i hold the identities of Pastry nodes whose node-IDs share an i-digit prefix
with n. For example, entries in the first row (i=0) have no common prefix with n. This gives each node a rough knowledge of other nodes, including nodes that are distant in the identifier space.
The neighborhood set contains the node-IDs that are closest to the local node according to a network proximity metric and is used to maintain local properties. It can, for example, be used to store replicated information on the neighbor nodes. Replication is a fault-tolerant way to store data on other nodes by keeping a copy of the original data.
The routing table sorts node-IDs by prefix. The leaf set of node n holds the nodes with the numerically closest node-IDs to n. The leaf set and the routing table provide the information relevant for routing.
When a node n receives a message to be routed, the node first checks its leaf set. If the message key falls within the range covered by its leaf set, the message is forwarded directly to the node whose ID is closest to the target key. If there is no match in the leaf set, the routing table is used to forward the message over a longer distance. This is
handled by trying to pass the message on to a node which shares a longer common prefix
with the message key than the key (node-ID) of node n. If there is no such entry in the
routing table, the message is forwarded to a node which shares a prefix of the same length
with the message key as n but which is numerically closer to the message key than n.
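To make the routing procedure more concrete, the following Java sketch illustrates the three routing cases described above. It is an illustration only, not the FreePastry API: node-IDs and message keys are modelled as hexadecimal strings and the routing table is simplified to a flat list of known nodes.

import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of Pastry's routing decision (not the FreePastry API).
public class PastryRoutingSketch {

    private final String localId;
    private final List<String> leafSet = new ArrayList<String>();      // numerically closest nodes
    private final List<String> routingTable = new ArrayList<String>(); // other known nodes

    public PastryRoutingSketch(String localId) { this.localId = localId; }
    public void addLeaf(String nodeId) { leafSet.add(nodeId); }
    public void addTableEntry(String nodeId) { routingTable.add(nodeId); }

    /** Returns the node-ID of the next hop for a message with the given key. */
    public String nextHop(String key) {
        // Case 1: the key falls within the leaf set range -> forward to the closest leaf.
        if (!leafSet.isEmpty() && inLeafSetRange(key)) {
            return closest(leafSet, key);
        }
        int l = sharedPrefixLength(localId, key);
        // Case 2: a routing table entry sharing a longer prefix with the key than the local node.
        for (String candidate : routingTable) {
            if (sharedPrefixLength(candidate, key) > l) return candidate;
        }
        // Case 3: any known node with an equally long prefix that is numerically closer to the key.
        List<String> known = new ArrayList<String>(leafSet);
        known.addAll(routingTable);
        for (String candidate : known) {
            if (sharedPrefixLength(candidate, key) >= l
                    && distance(candidate, key).compareTo(distance(localId, key)) < 0) {
                return candidate;
            }
        }
        return localId; // the local node is numerically closest: it is the message's destination
    }

    private boolean inLeafSetRange(String key) {
        BigInteger k = new BigInteger(key, 16);
        BigInteger min = null, max = null;
        for (String id : leafSet) {
            BigInteger v = new BigInteger(id, 16);
            if (min == null || v.compareTo(min) < 0) min = v;
            if (max == null || v.compareTo(max) > 0) max = v;
        }
        return k.compareTo(min) >= 0 && k.compareTo(max) <= 0;
    }

    private static String closest(List<String> ids, String key) {
        String best = ids.get(0);
        for (String id : ids) {
            if (distance(id, key).compareTo(distance(best, key)) < 0) best = id;
        }
        return best;
    }

    private static BigInteger distance(String a, String b) {
        return new BigInteger(a, 16).subtract(new BigInteger(b, 16)).abs();
    }

    private static int sharedPrefixLength(String a, String b) {
        int i = 0;
        while (i < Math.min(a.length(), b.length()) && a.charAt(i) == b.charAt(i)) i++;
        return i;
    }
}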
The Pastry network is self-organizing. This means Pastry has to deal with node arrivals, departures, and failures while maintaining good routing performance. A new node n joins the network via a well-known Pastry node k, which is used for bootstrapping. The node n copies its initial neighborhood set from k, since k is considered to be close. To build the leaf set, node n routes a special "join" message via k to the node c with the numerically closest node-ID to n and takes the leaf set of c as its own. All nodes that forward the join message to c provide n with their routing information and allow n to construct its routing table. When the table is constructed, the node sends its node state to all nodes in its routing table so they can update their own routing information.
A node failure is detected when a communication attempt with another node fails. The aliveness of the nodes in the routing table and leaf set is checked automatically, since the routing procedure requires contacting these nodes. The neighborhood set is not involved in routing, and therefore its nodes need to be tested periodically. Failed nodes need to be deleted and replaced. The replacement procedure is handled by contacting another node with characteristics similar to those of the failed node and retrieving replacement information from it [3][23].
2.2 Multiple Description Coding
MDC is especially helpful on unreliable transport channels, such as those used for the growing amount of voice, image, and video communication over the Internet. Without MDC, the loss of one packet can lead to the loss of a large number of source samples and hence to an interruption of the stream. With MDC there is no interruption, only variations in the stream quality [9][10].
The next few sub-sections introduce four different multiple description coding schemes. All presented coding schemes are based on the video codec standard H.264/AVC. The codec was developed by the ITU-T Video Coding Experts Group (VCEG) together with the ISO/IEC Moving Picture Experts Group (MPEG). It is a block-based motion-estimation codec standard with the capability to provide good video quality at substantially lower bit rates than previous standards, and it is flexible enough to be applied to a wide variety of applications on a wide variety of networks and systems, including low and high bit rates, low- and high-resolution video, broadcast, DVD storage, RTP/IP packet networks, and ITU-T multimedia telephony systems. This makes it the most suitable video codec for video streaming over the Internet. More details about the codec can be found in [11].
According to the definition in [12], MDC schemes can be grouped into two sets: The first
group exploits the spatial correlation within each frame of the sequence when creating the
descriptions. The second group takes advantage of the temporal correlation between the
subsequences obtained by the temporal sampling of the original video sequence.
In this MDC scheme, the block-based motion vector fields are split into two parts using an enhanced version of the quincunx sampler. The quincunx sampler generates the two parts by splitting the motion vectors of adjacent blocks into two disjoint sets: in quincunx subsampling, every other block is removed from each line in the frame. Each of the two sets is transmitted to the decoder as a separate description. Each description includes some unique, specific information but also some partial information that has to be duplicated in both descriptions in order to reconstruct the original signal.
Simulation results show that the MDC scheme saves a large amount of data compared to simply duplicating the bitstream, without causing serious quality loss in the reconstructed video [12][13].
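As a small illustration of the splitting step, the following Java sketch assigns blocks (here simply integer values, e.g. indices of motion vectors) to two descriptions in the quincunx (checkerboard) pattern; the duplicated side information mentioned above is omitted. This is an illustration only, not the coder from [12].

// Minimal sketch of quincunx subsampling on a grid of blocks. Blocks with an
// even coordinate sum go into description 0, the others into description 1,
// so every other block is removed from each line of the frame.
public class QuincunxSplitter {

    /** Splits a rows x cols grid of block values into two descriptions. */
    public static int[][][] split(int[][] blocks) {
        int rows = blocks.length;
        int cols = blocks[0].length;
        int[][] description0 = new int[rows][cols];
        int[][] description1 = new int[rows][cols];
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                if ((r + c) % 2 == 0) {
                    description0[r][c] = blocks[r][c]; // description 1 must estimate this block
                } else {
                    description1[r][c] = blocks[r][c]; // description 0 must estimate this block
                }
            }
        }
        return new int[][][] { description0, description1 };
    }
}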
Table 1: Qualitative comparison of MDC coding techniques (Table taken from [12])
Tunability: no / no / no / yes
Table 2: List of available video support formats in JMF

Format          Playback/Encoding   RTP
Cinepak         D                   -
MJPEG           D, E                -
RGB             D, E                -
YUV             D, E                -
VCM             D, E                -
IBM HotMedia    D                   -
Cinepak         D                   -
H.261           D                   R
H.263           D, E                R, T
RGB             D, E                -

read: the media type can be used as input (read from a file); write: the media type can be generated as output (written to a file); D: the format can be decoded and presented; E: the media stream can be encoded in the format; R: the format can be decoded and presented; T: media streams can be encoded and transmitted in the format.
2.3.2 FOBS4JMF and JFFMPEG
Fobs4JMF is an open-source JMF plug-in built around the native FFMPEG library and provides developers with a much simpler programming interface. It allows playing the most common formats and codecs (ogg, mp3, m4a, divx, xvid, h264, mov, avi, etc.) thanks to the FFMPEG library. The downside is that it provides no encoding mechanism.
JFFMPEG is another open-source project built on FFMPEG. JFFMPEG is a plug-in that allows the playback of a number of common audio and video formats. A JNI wrapper allows direct calls into the full FFMPEG library. But again, only decoding of video streams is supported; there is no encoding of video streams.
2.3.3 Freedom for Media in Java (FMJ)
FMJ supports many media codecs and types, again by wrapping native libraries (like the FFMPEG library), but encoding of media with new codecs like H.264/AVC is still in development in this framework.
2.3.5 Summary
Most of the discussed projects, like Fobs4JMF, JFFMPEG, FMJ and VLC, support decoding of today's available video codecs, but only a few of them have the encoding capabilities to support a large variety of codecs. Sun's JMF offers a lot of functionality and is included in other projects like Fobs4JMF and JFFMPEG, but has limited codec support.
Table 3 gives an overview of the available projects for video handling in Java.
Table 3: Video Supporting Projects for Java (column headings: Project, H.264 Decoding/Playback, H.264 Encoding)
2.4 P2P Video Streaming
This section gives some examples of related work in the area of P2P video streaming. A comparison of the different solutions is given in Table 4.
2.4.2 Distributed Video Streaming with Forward Error Correction
The receiver sends control information to each sender so that the sender is able to determine the next packet to be sent. The sender chooses the next packet by using the packet partition algorithm.
A drawback of FEC is that it results in bandwidth expansion: it induces a small overhead and hence reduces the amount of bandwidth available for the actual video stream. The system has to make a trade-off between the redundancy of the data and an efficient use of the available bandwidth.
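As a simple illustration (the numbers are chosen for this example and are not taken from the cited work): with a systematic (n, k) = (12, 10) erasure code, two redundant packets are sent for every ten data packets. The stream then tolerates the loss of any two packets per group, but roughly 17% of the transmitted packets (2 of 12) carry redundancy instead of video data, which corresponds to a 20% overhead on top of the original bit rate.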
2.4.3 SplitStream
SplitStream [8] is a high-bandwidth content distribution system based on end-system
multicast, which also includes video streaming. Some P2P streaming architectures use a
tree for video distribution. A node in the tree forwards its incoming video stream to all its
child nodes. The idea behind SplitStream is to split the content to be distributed into several substreams, called stripes. Every stripe has its own tree structure and uses multicast for stream distribution, so SplitStream does not use one tree but several tree structures. A peer
that wishes to receive a certain stripe must join a specific tree network. To create the
different stripes a technique called multiple description coding can be used, described in
Section 2.2.
Scribe [5] is a scalable application-level multicast infrastructure system which is built upon
Pastry. To create a multicast group Scribe generates a random Pastry key known as the
group id. The multicast tree associated with the group is formed by the union of Pastry
routes from each group member to the group id’s root. By using reverse path forwarding the
messages are multicast from the root to the individual members. SplitStream is a further development of Scribe with a focus on video distribution.
A simple example of how SplitStream works is given in Figure 1. The original content from the
source is split into two stripes. Each stripe then builds its own multicast tree such that a
peer is an interior node in one tree and a leaf in the other.
In other P2P tree network constructions there is the problem of leaf nodes, which do not
have any child nodes and don’t need to forward any resources. The burden of forwarding
multicast traffic is carried only by the interior tree nodes. The challenge in SplitStream is to construct a forest of multicast trees such that each peer is an interior node in one tree, where it has to forward the stream, and a leaf node in all the others. A peer can choose how many stripes it wants to receive according to its bandwidth, but it also has to contribute the same amount of data as it receives. A drawback of this design is that there is no guarantee of finding an optimal forest, even if the network has sufficient capacity.
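In SplitStream this forest is obtained by giving the stripes Scribe group IDs that differ in their most significant digit: since Pastry routes by digit prefixes, a node appears as an interior node only in the tree of the stripe whose ID starts with the same digit as its own node-ID. The following Java sketch merely illustrates this idea; the choice of 16 hexadecimal stripes and all names are assumptions for illustration, not code from [8].

import java.util.ArrayList;
import java.util.List;

// Sketch of interior-node-disjoint stripe trees via stripe IDs that differ in
// their first digit (16 hexadecimal stripes assumed for illustration).
public class StripeIdSketch {

    /** Derives one stripe ID per hex digit by replacing the first digit of the channel key. */
    public static List<String> stripeIds(String channelKey) {
        List<String> ids = new ArrayList<String>();
        for (char digit : "0123456789abcdef".toCharArray()) {
            ids.add(digit + channelKey.substring(1));
        }
        return ids;
    }

    /** A node can only become an interior node of the stripe tree matching its own first digit. */
    public static boolean isInteriorCandidate(String nodeId, String stripeId) {
        return nodeId.charAt(0) == stripeId.charAt(0);
    }
}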
2.4.4 CoolStreaming/DONet
DONet [21] is a Data-driven Overlay Network for live media streaming. Every peer in the DONet network periodically exchanges data availability information with a set of partners. Missing or unavailable data elements can be retrieved from one or more partner peers. Every peer also functions as a data supplier to its partner peers.
The idea behind DONet's data-driven design is that it should be easy to implement (no complex global structure to maintain), efficient (data forwarding is dynamically determined according to data availability), and robust and resilient (the partnerships enable adaptive and quick switching among multiple suppliers).
The IP address is used as DONet’s unique node identifier. A new node first contacts the
origin node, which randomly selects a deputy node and redirects the new node to the
deputy. From the deputy the new node obtains a list of possible partner nodes and tries to
establish a partnership with them in the overlay. The origin node of the video stream is
persistent during the lifetime of the stream.
The system consists of three key modules: membership manager, partnership manager
and the transmission scheduler. The membership manager maintains a partial view of other
overlay nodes. Every node periodically sends a membership message to its partner nodes
to announce its existence. The partnership manager establishes and maintains the
partnership with other nodes by continuously exchanging its Buffer Map with the partners.
The Buffer Map represents the node’s buffer, containing the current available video stream
segments. The last component, the transmission scheduler, schedules the transmission of
video data. Thanks to the partnership manager, a node is always aware of its partner’s
video segments. The transmission scheduler has to decide from which partner to fetch
which data segment. Each node can be either a receiver or a supplier of a video stream
segment, or both.
The DONet system doesn’t have a parent-child relationship between the nodes. Every node
that needs something can request a data segment from one of its partners that claims to
possess it. A problem is the large overhead that this system induces since every peer has
to inform its partners periodically about the current buffer content.
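The periodically exchanged Buffer Map can be pictured as a bitmap over a sliding window of segment numbers. The following Java sketch is only an illustration of the idea; the class name, the window representation and the methods are assumptions, not CoolStreaming code.

import java.util.BitSet;

// Sketch of a DONet-style Buffer Map: a sliding window of segment sequence
// numbers represented as a bit set that partners exchange periodically.
public class BufferMap {
    private final int windowSize;  // number of segments covered by the map
    private final int windowStart; // sequence number of the first segment in the window
    private final BitSet available = new BitSet();

    public BufferMap(int windowStart, int windowSize) {
        this.windowStart = windowStart;
        this.windowSize = windowSize;
    }

    /** Marks a received segment as available if it falls inside the window. */
    public void markReceived(int seq) {
        if (seq >= windowStart && seq < windowStart + windowSize) {
            available.set(seq - windowStart);
        }
    }

    /** True if this node can supply the segment, i.e. a partner may fetch it from us. */
    public boolean has(int seq) {
        return seq >= windowStart && seq < windowStart + windowSize
                && available.get(seq - windowStart);
    }

    /** Segments the partner has that we are still missing; candidates for the scheduler. */
    public BitSet missingComparedTo(BufferMap partner) {
        BitSet missing = (BitSet) partner.available.clone();
        missing.andNot(this.available); // assumes both maps share the same windowStart
        return missing;
    }
}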
CoolStreaming is the Internet-based implementation of DONet. It was first released in 2004 and has been used to broadcast sports programs.
2.4.5 PULSE
PULSE [22] is a P2P system for live video streaming and is designed to operate in
scenarios where the nodes can have heterogeneous and variable bandwidth resources.
The PULSE system places nodes in the network according to their current trading performance. This means nodes with rich resources are located near the source, where they are able to serve a large number of neighbors with more recent data, whereas nodes with scarce resources would only slow down the system if they were placed near the source. Nodes are able to roam freely in the system and are allowed to react to global membership changes and local bandwidth capacity variations over time. PULSE uses a data-driven
and receiver-based approach.
PULSE forms a mesh based network structure with trading and control links between the
peers. Every node is free to exchange control information and data for the stream with
peers of its choosing. The main criteria for choosing partners are the topological
characteristics of the underlying network, the resources available at each peer and the
chunk distribution/retrieval algorithms executed by the nodes.
The video stream encoded at the source is split into a series of chunks. To make the system more resilient to chunk loss, the source may apply fixed-rate forward error correction (FEC) codes or other forms of encoding, such as MDC. All chunks are numbered. This allows the
peers to reconstruct the initial stream. A PULSE peer has three main components: The data
buffer stores chunks before playback. The knowledge record keeps information about
remote peer’s presence, data content, past relationships and current local node
associations. The third component, the trading logic, requests chunks from neighbors and schedules the sending of packets.
Table 4: Comparison of Video Streaming Solutions (compared systems: CoopNet, Distributed Video Streaming with FEC, SplitStream, DONet, PULSE)
Application
The application is built on top of the framework. It has to initialize the framework components (the P2P Manager and the MDC Manager) with the right parameters and to shut them down when the application closes. The GUI accesses the framework components and data through the application interface, since the application manages the two active instances of the P2P and MDC Manager.
Communication Module
The Communication Module handles all direct communication aspects between the peers
over TCP or UDP. This component has no access to the peer-to-peer overlay network. The
reason is the modular design of the framework and the knowledge that direct connections
are more efficient and reliable than sending messages over a peer-to-peer network.
The main functionality of the component is to request a certain video substream
(description) from other peers and to negotiate the terms of the stream sharing. The requesting peer asks whether the source peer is willing to upload the requested substream.
A peer leaving the network or stopping its participation in the current video stream distribution needs to send a disconnect signal to its neighbor peers so they can immediately start looking for a replacement. This is especially important if the disconnecting peer was a source. The module therefore needs to be capable of handling different communication protocols.
Another task is to check whether the connected peers that are currently uploading or downloading a stream are still alive. A peer has to verify periodically that its connected peers are still alive by exchanging messages. If a peer does not answer, the connection to that peer will be terminated. This is necessary in case one of the currently connected peers suddenly fails and needs to be replaced with another source peer.
The Communication Module is closely coupled to the Connection Manager, since the
Connection Manager maintains the lists with the current incoming and outgoing
connections.
Connection Manager
The Connection Manager maintains four different lists (cf. Figure 4). The first list is the
Channel List. This list contains all current available video channels in the network. Each list
entry consists of the channel name and the channel key. The key is needed to create a
lookup for possible channel sources in the DHT. The list is filled by the P2P Controller
component.
The second list, the Source List, holds all the available sources for the current playing
channel. As soon as the peer chooses to watch a certain channel, the P2P Controller
creates a lookup for the channel sources and fills in the Source List. The list only contains
the available sources for the current playing channel. If the peer switches the channel, the
list content will be deleted and replaced. Each entry holds the IP address and the port number of a source peer and the identifier of the description it offers.
The third list, the Download List, contains the peers from which the local peer is currently receiving a part of the video stream (a description). This list is managed by the Communication Module. As soon as the Communication Module has successfully negotiated
a description, the sender’s connection information will be added to the list.
The last list, the Upload List, keeps track of the current active stream upload connections,
the connections to which the peer is forwarding the video data stream. This list is also
updated by the Communication Module since it handles all incoming requests.
P2P Controller
The P2P Controller is the main component in the P2P Manager. Its task is to initiate all the
other components and to provide the necessary control mechanisms. The connection to the
underlying overlay network is also set up by the P2P Controller.
When a peer has to make a lookup call or add some new data in the DHT, the P2P
Controller initiates the calls through the Resource Manager. The results of the lookup calls
will also be handled by the P2P Controller (e.g. updating the lists in the Connection
Manager).
An application built on top of this framework needs an instance of this component to set up
and control the subcomponents of the P2P Manager.
A standard peer that only wants to receive and play a video stream needs only the Player,
Decoder, Joiner, In-Buffer and Out-Buffer components. If the peer acts as a consistent
source for a video channel the Encoder and Splitter components are also used.
Encoder
The main function of the Encoder is the encoding of an incoming video stream. The
Encoder should be able to take different kinds of available video inputs. For example a
possible input could be an installed TV card, a media file or any other possible video
source. The incoming video stream first needs to be encoded. This is necessary because sending the raw video stream over the network would consume unnecessarily high bandwidth. With a suitable compression algorithm the raw bitstream can be encoded and
hence use much less bandwidth. The encoded video stream will then be sent to the Splitter
for further processing.
Decoder
The Decoder has the opposite task of the Encoder. It decompresses the stream into a format that can be played and displayed on the screen. For decoding, the same video codec is used as in the Encoder. The Decoder receives an encoded video stream from the
Joiner, decodes it and sends the video stream to the player.
Splitter
The Splitter performs the actual video stream splitting. Since multiple description coding is used, the original stream needs to be split into several substreams (descriptions); this is the task of the Splitter. The Encoder sends the encoded original video stream to the Splitter, which creates the independent substreams according to a specified MDC scheme. Depending on the MDC scheme used, the two components Encoder and Splitter may be combined into a single component if the splitting and encoding tasks cannot be done independently. Another task of the Splitter is to create the UDP packets to send the video stream over the network to the target.
Joiner
The purpose of the Joiner is to take several incoming substreams and combine them into the original stream by using a specific MDC scheme. If the Joiner receives all substreams, the original video can be reconstructed; otherwise only a part of the original stream will be available. Like the Encoder and Splitter components, the Decoder and Joiner may be combined into one component, depending on the MDC scheme used.
The Joiner takes the incoming packets from the In-Buffer, retrieves the video data from the packets and creates the video stream.
In-Buffer
The In-Buffer listens to incoming packets from the network. When a packet arrives, it will be
added to the buffer. This is needed because the packets may arrive at irregular intervals while the system needs a constant stream to play the video, especially in the case of live video streaming. Before the video starts to play, the video stream is pre-buffered; afterwards a continuous video stream can be created. A packet burst can also be absorbed by the buffer.
Each incoming packet from the network will be copied to the Out-Buffer because each peer
also has to contribute in the video distribution process and stream the video channel to
other peers.
Out-Buffer
The Out-Buffer manages the sending of the UDP packets containing video data. Depending on the peer's main purpose, the Out-Buffer receives UDP packets from the Splitter or the In-Buffer: if the peer acts as a streaming server, the Out-Buffer receives new packets from the Splitter, otherwise from the In-Buffer. The Out-Buffer has access to the Connection Manager to determine each packet's correct target destination, since the Connection Manager maintains the lists with all the incoming and outgoing connections. A successfully
sent packet will be deleted from the buffer.
Player
The Player receives an incoming video stream from the Decoder and has the job of displaying the decoded video stream on the screen. It also provides user interface capabilities, so that a user may interact with the video stream, for example by resizing the video window or muting the audio stream.
MDC Controller
The MDC Controller is the main component in the MDC Manager. Its task is to initiate all the
other components and to provide the necessary control mechanisms. It also offers the
interface for accessing all other modules in the MDC Manager.
Communication Module
As discussed in Subsection 3.3.1, this component handles the direct communication aspects between the peers. The current implementation can handle both connection types, TCP and UDP. There is an open port for incoming TCP connections and another port
for incoming UDP packets.
The port used for incoming TCP socket connections is defined as a parameter in the framework configuration file. The default port value is set to 14100. Every incoming TCP socket connection is handled in a separate thread, since it is more efficient to handle several connections at the same time than to process them one after the other. The maximum number of allowed incoming connections is also defined in the configuration file. If a peer has too many ongoing connections at a time, it sends a "Too-Many-Connections" message to each incoming connection request. This helps the requesting peer to decide whether it should try to establish a connection later on or mark the peer as dead. The first message
sent by the connection requesting peer determines the protocol to use in this connection
(e.g. protocol for a stream request, protocol for peer disconnect, etc.). A connection
remains active as long as the end of the protocol hasn’t been reached. The component is
implemented in such a way that in the future additional protocols can be added to the
system.
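A minimal Java sketch of this connection handling is given below. The class and field names are illustrative assumptions, not the framework's actual code; only the thread-per-connection pattern and the Too-Many-Connections reply described above are shown.

import java.io.IOException;
import java.net.ServerSocket;
import java.net.Socket;

// Sketch: one thread per incoming TCP connection, with a connection limit.
public class TcpConnectionListener {

    private final int port;           // e.g. 14100, taken from the configuration file
    private final int maxConnections; // also defined in the configuration file
    private int activeConnections = 0;

    public TcpConnectionListener(int port, int maxConnections) {
        this.port = port;
        this.maxConnections = maxConnections;
    }

    public void listen() throws IOException {
        try (ServerSocket server = new ServerSocket(port)) {
            while (true) {
                Socket socket = server.accept();
                synchronized (this) {
                    if (activeConnections >= maxConnections) {
                        // Reject: the requesting peer may retry later instead of marking us dead.
                        socket.getOutputStream().write("Too-Many-Connections\n".getBytes());
                        socket.close();
                        continue;
                    }
                    activeConnections++;
                }
                // Handle each accepted connection in its own thread; the first message
                // received there selects the protocol (stream request, disconnect, ...).
                new Thread(() -> handleConnection(socket)).start();
            }
        }
    }

    private void handleConnection(Socket socket) {
        try (Socket s = socket) {
            // ... read the first message, run the selected protocol until it ends ...
        } catch (IOException ignored) {
        } finally {
            synchronized (this) { activeConnections--; }
        }
    }
}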
Instead of only using TCP, the component may also send and receive UDP packets through an open UDP port. The UDP socket is also used to send messages between the peers, but with less connection overhead than TCP and no need for a continuously open connection. In the current implementation, UDP messages are used to verify that the source
peers, from whom a peer is receiving a video stream, are still alive.
The Communication Module constantly monitors the Download List in the Connection Manager to check whether there are enough active sources for the currently active video stream.
If this is not the case, the module requests the address of a new possible source from the
Connection Manager. The Connection Manager determines the next source address by
using a specific algorithm (the algorithm will be discussed later on in Section 3.6) and
returns the result, a new possible source, to the Communication Module. Then a new
connection will be established over TCP to the possible source peer and the protocol for a
video stream request will be initialized. If the negotiation of the stream request is successful
then the new source will be added to the Download List; otherwise the system looks for the next possible source.
Connection Manager
The Connection Manager maintains the four lists (Channel List, Source List, Download List
and Upload List). The implementation of this component needs to be thread safe since
different components may want to simultaneously access one of the integrated lists. The
Out-Buffer accesses the Upload List to determine the destination of the UDP video packets.
The Communication Module updates the Download List, Upload List and Source List by
adding or removing connection information of other peers. The P2P Controller updates the
Channel List and the Source List if necessary.
The Channel List holds all current active video channels. Each entry consists of the channel
name and the channel id. The channel id is the Pastry lookup key to find the list with the
channel sources in the DHT. The other three lists contain connection information to other
peers. An entry consists of the IP address and the port numbers for TCP and UDP connections.
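The following Java sketch illustrates how such thread-safe lists can be kept; the entry fields and method names are assumptions for illustration, not the framework's actual classes.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// Sketch of thread-safe list handling in the Connection Manager.
public class ConnectionManagerSketch {

    /** Connection information of one peer: address, TCP/UDP ports and the description it serves. */
    public static class PeerEntry {
        final String ipAddress;
        final int tcpPort;
        final int udpPort;
        final int descriptionId;

        PeerEntry(String ipAddress, int tcpPort, int udpPort, int descriptionId) {
            this.ipAddress = ipAddress;
            this.tcpPort = tcpPort;
            this.udpPort = udpPort;
            this.descriptionId = descriptionId;
        }
    }

    // CopyOnWriteArrayList allows safe concurrent reads (Out-Buffer) and writes (Communication Module).
    private final List<PeerEntry> downloadList = new CopyOnWriteArrayList<PeerEntry>();
    private final List<PeerEntry> uploadList = new CopyOnWriteArrayList<PeerEntry>();

    public void addDownloadSource(PeerEntry e) { downloadList.add(e); }
    public void removeDownloadSource(PeerEntry e) { downloadList.remove(e); }

    /** Used by the Out-Buffer to find all upload targets subscribed to a given description. */
    public List<PeerEntry> uploadTargetsFor(int descriptionId) {
        List<PeerEntry> targets = new ArrayList<PeerEntry>();
        for (PeerEntry e : uploadList) {
            if (e.descriptionId == descriptionId) {
                targets.add(e);
            }
        }
        return targets;
    }
}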
P2P Controller
Besides the tasks already mentioned in the Design section, the P2P Controller also creates a
new Pastry node and tries to connect to an existing Pastry ring when the address and port
number of a bootstrap node is given. If the connection to the ring fails, the node starts its
own ring.
An external application is used to handle the encoding and decoding of the video stream. The application used is the VLC media player, serving as a video stream server as well as a video player. A peer sends and receives video data from the VLC media player through an open UDP socket.
Splitter
The Splitter has an open UDP socket and is waiting for incoming UDP packets from the
VLC media player since the source is in an external application. The Splitter is only needed
if the peer decides to act as a streaming server. From the VLC media player the Splitter
receives the UDP packets in the correct order and adds to each packet a sequence
number. The number is continuously increasing. The sequence number is needed to
reconstruct the correct order of the packets at the receiver side to play the video stream. To
every UDP packet the Splitter also adds a description id number. The packets with the same id together form a single description. The modified packets are then added to the Out-Buffer and sent to their target peers.
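The following sketch illustrates how such a packet could be built and taken apart again on the receiver side. The 8-byte header layout (a 4-byte sequence number followed by a 4-byte description id) is an assumption made for this illustration, not necessarily the framework's actual wire format.

import java.net.DatagramPacket;
import java.net.InetAddress;
import java.nio.ByteBuffer;

// Sketch: prepend a sequence number and a description id to each VLC payload.
public class PacketWrapper {

    /** Wraps a raw VLC payload into a framework packet carrying seqNr and descriptionId. */
    public static byte[] wrap(byte[] vlcPayload, int seqNr, int descriptionId) {
        ByteBuffer buf = ByteBuffer.allocate(8 + vlcPayload.length);
        buf.putInt(seqNr);         // used by the receiver to restore the packet order
        buf.putInt(descriptionId); // packets with the same id form one description
        buf.put(vlcPayload);
        return buf.array();
    }

    /** The Joiner's inverse operation: strips the header and returns the original VLC payload. */
    public static byte[] unwrap(byte[] frameworkPacket) {
        ByteBuffer buf = ByteBuffer.wrap(frameworkPacket);
        buf.getInt(); // sequence number, consumed by the In-Buffer/Joiner
        buf.getInt(); // description id
        byte[] payload = new byte[buf.remaining()];
        buf.get(payload);
        return payload;
    }

    /** Builds the datagram that the Out-Buffer would send to one upload target. */
    public static DatagramPacket toDatagram(byte[] data, InetAddress target, int port) {
        return new DatagramPacket(data, data.length, target, port);
    }
}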
Joiner
The Joiner is the opposite of the Splitter. Instead of adding new data to the packets the
Joiner removes the description id and the sequence number. The result is the original-sized UDP packet created by the VLC media player. The Joiner sends the cleaned UDP packets through a UDP socket to the VLC media player that is going to play the video stream.
The Joiner grabs new packets from the In-Buffer. The buffer sorts the packets according to their sequence number. Each packet removed from the buffer has to be examined. If the sequence number of the packet is greater than the number of the previous packet, the packet will be sent to the VLC media player, since the packets are in the correct order. If the sequence number is smaller than the previous one, the packet is discarded, because the time frame the packet represents has already passed. A live video stream is time sensitive and packets that arrive too late are no longer of use.
In-Buffer
The In-Buffer listens on an open socket for incoming UDP packets. The buffer is
implemented as a priority queue, sorting its packets according to their sequence number.
The packet with the smallest sequence number is always at the head of the queue and the one with the biggest number at the tail. This makes it easy for the Joiner to retrieve the packets in the correct order, since the next suitable packet is always at the head.
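A minimal Java sketch of such a buffer, assuming a simple Packet class that carries the sequence number (the names are illustrative, not the framework's actual code):

import java.util.Comparator;
import java.util.PriorityQueue;

// Sketch of the In-Buffer as a priority queue ordered by sequence number.
public class InBufferSketch {

    public static class Packet {
        final int seqNr;
        final byte[] data;
        Packet(int seqNr, byte[] data) { this.seqNr = seqNr; this.data = data; }
    }

    // The packet with the smallest sequence number is always at the head of the queue.
    private final PriorityQueue<Packet> queue =
            new PriorityQueue<Packet>(64, new Comparator<Packet>() {
                public int compare(Packet a, Packet b) { return Integer.compare(a.seqNr, b.seqNr); }
            });

    /** Called for every UDP packet received from the network. */
    public synchronized void add(Packet p) { queue.add(p); }

    /** Called by the Joiner; returns the packet with the smallest sequence number, or null. */
    public synchronized Packet poll() { return queue.poll(); }
}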
Out-Buffer
The Out-Buffer is implemented as a simple first-in-first-out (FIFO) buffer. The packet that is
the oldest in the buffer will be sent first to its destination target.
The Upload List (part of the Connection Manager) lists all peers to which the peer has to send the packets that are currently in its Out-Buffer. The list does not only contain the connection information but also the id of the currently subscribed description. Together with the description id stored in every UDP packet, every packet can be mapped to its correct receiver. The Out-Buffer creates a copy of the packet, adds the correct address, and then sends it over the network to the target.
MDC Controller
The MDC Controller launches the external VLC media player as soon as one is needed.
The input parameters for the media player will be set automatically by the MDC Controller.
If the peer receives all description streams (packets), it can watch the video stream in full quality. Otherwise, if the peer receives only a part of the streams, the video will be played with some small artifacts.
If the source peer accepts the request, the source peer will be added to the Download List (which contains all the peers serving as active sources).
The algorithm runs and tries to find additional resources as long as there are possible
source peers in the Source List and the current peer does not receive the complete set of
the necessary descriptions.
Algorithm in detail:
1 The algorithm first determines how many descriptions need to be gathered to
receive the full video stream of the desired channel. In the current implementation there
is a global parameter providing this information. A list containing all description IDs is
created.
2 The next step is to compare the description ID list, created in step 1, with the description
IDs already active in the Download List since only one source for each description is
needed.
3 After the first two steps the algorithm knows which descriptions the peer is already
receiving and which ones are still missing. For every missing description the algorithm
chooses at random a possible source from the Source List.
4 The peer now tries to establish a connection to the possible source peer and, if the connection succeeds, sends a description request message.
5 There are two possible next steps:
5.1 The source peer is unreachable or denies the request; it is then removed from the Source List.
5.2 The source peer accepts the request and becomes an active source; it is added to the Download List.
6 Stop the algorithm if there are no more available sources or the peer is receiving all
descriptions. Otherwise start again with step one.
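The following Java sketch mirrors these steps. The interfaces and names (SourceEntry, Network, requestDescription) are assumptions used only to make the control flow concrete; they are not the framework's actual classes.

import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Random;
import java.util.Set;

// Sketch of the source search algorithm; types and names are illustrative only.
public class SourceSearchSketch {

    public interface SourceEntry { int descriptionId(); }

    public interface Network { boolean requestDescription(SourceEntry source, int descriptionId); }

    private final Random random = new Random();

    public void searchSources(int totalDescriptions,         // step 1: global parameter
                              List<SourceEntry> sourceList,   // possible sources for the channel
                              List<SourceEntry> downloadList, // currently active sources
                              Network network) {
        while (true) {
            // Steps 1 and 2: determine which descriptions are still missing.
            Set<Integer> missing = new HashSet<Integer>();
            for (int id = 0; id < totalDescriptions; id++) missing.add(id);
            for (SourceEntry active : downloadList) missing.remove(active.descriptionId());

            // Step 6: stop when everything is received or no sources are left.
            if (missing.isEmpty() || sourceList.isEmpty()) return;

            boolean progress = false;
            for (int descriptionId : missing) {
                // Step 3: pick a random candidate source offering the missing description.
                List<SourceEntry> candidates = new ArrayList<SourceEntry>();
                for (SourceEntry s : sourceList) {
                    if (s.descriptionId() == descriptionId) candidates.add(s);
                }
                if (candidates.isEmpty()) continue;
                SourceEntry candidate = candidates.get(random.nextInt(candidates.size()));

                // Steps 4 and 5: request the description and update the lists accordingly.
                if (network.requestDescription(candidate, descriptionId)) {
                    downloadList.add(candidate);  // 5.2: the candidate becomes an active source
                } else {
                    sourceList.remove(candidate); // 5.1: unreachable or declined
                }
                progress = true;
            }
            if (!progress) return; // no remaining source offers a missing description
        }
    }
}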
The current algorithm chooses the sources randomly. This approach is sufficient for testing the framework, but it is not an efficient solution. An improved algorithm would have to
consider the underlying network structure, such as the geographical location of the peers,
the round trip time, the reliability of the connection, packet loss, and also the available
bandwidth capacities of the peers.
A peer leaving the network uses a protocol to send a disconnect message to all its current connections, informing them that they are going to lose the currently subscribed substream.
Every peer normally monitors its active connections and would automatically realize a
change, but it would take some time. It is more efficient to directly send a message to the
connected peers so that they can instantly adapt to the change and look for a new source
before it has a noticeable impact on the video stream.
3.7.3 Protocols
The current implementation requires two protocols for the direct communication between
the peers. The first protocol is used to negotiate a substream (description) request (cf.
Figure 9). Every incoming description request is first evaluated with the following criteria:
• Does the current active channel name match the requested channel name?
• Is the requested description available?
• Is the maximum number of active connections not yet reached?
• Does the connection to the peer already exist?
If the request passes this evaluation, the asked peer sends an Accept-Message and starts sending the UDP packets. If the peer declines, a Decline-Message will be sent.
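The evaluation of an incoming request can be sketched as follows; the DescriptionRequest and PeerState types and their fields are illustrative assumptions, not the framework's actual classes.

import java.util.Set;

// Sketch of evaluating an incoming description request against the four criteria above.
public class RequestEvaluatorSketch {

    public static class DescriptionRequest {
        String channelName;      // channel the requester wants to receive
        int descriptionId;       // requested substream
        String requesterAddress; // used to detect an already existing connection
    }

    public static class PeerState {
        String activeChannel;
        Set<Integer> availableDescriptions;
        Set<String> connectedPeers;
        int activeConnections;
        int maxConnections;
    }

    /** Returns true if an Accept-Message should be sent, false for a Decline-Message. */
    public static boolean accept(DescriptionRequest req, PeerState state) {
        boolean sameChannel    = req.channelName.equals(state.activeChannel);
        boolean hasDescription = state.availableDescriptions.contains(req.descriptionId);
        boolean hasCapacity    = state.activeConnections < state.maxConnections;
        boolean notConnected   = !state.connectedPeers.contains(req.requesterAddress);
        return sameChannel && hasDescription && hasCapacity && notConnected;
    }
}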
The second protocol is used to send a Stream-Disconnect-Message (cf. Figure 10) to all currently connected peers to inform them that this source peer is about to shut down or switch the channel.
4 Performance Evaluation
This chapter gives information about the evaluation of the implemented application. It shows which parts are working and where open problems remain in the implementation.
4.2 Problems
This section gives an overview of open problems and of problems whose solutions led to design changes.
4.2.1 DHT - PAST
If a new peer joins a neighborhood set that stores information and a lookup-message arrives at the new peer, the resulting return-message contains only a part of the stored information, since the new peer does not have all the data.
To solve this problem every new peer that joins a neighborhood set that stores information,
needs to receive all the already inserted data from its neighbors. The attempted solution for
this problem was to use the reinsert-functionality of Past. The reinsert method takes as
argument the key of an already stored content object, creates a local lookup for that object
and reinserts the content object in the Past network by updating all peers which are storing
the specific object. This would update all replicas and hence also supply a new peer with
the already stored data in the DHT. As soon as a peer finds a new neighbor in its neighborhood set, the reinsert method would be invoked. But the attempted solution failed because Past was unable to reinsert the data. The Past API was always able to retrieve the stored data first but was subsequently unable to reinsert it. For unknown reasons, Past always lost the reinsert-messages somewhere in the Past implementation. Another attempt, using the same methods that are currently used to register new channels or sources in the network, suffered from the same effect. Past seems to have a problem when a peer p tries to reinsert an object that p itself is managing.
4.2.2 MDC
The MDC scheme proposed in Section 3.5 is not sufficient to make the video stream more error-resistant. Small losses in the video stream transmission already have a high impact on the stream quality. If only 5% of the UDP packets are missing, the video shows artifacts or a considerable amount of blur. During the tests, a standard number of twenty descriptions was used. If only one description went missing, the effect on the video stream was severe. An example is given in Figure 11 and Figure 12. The video was encoded with the H.264 video codec at a bitrate of 384 kb/s. The first image (cf. Figure 11) shows the video in full quality, where the player received all descriptions. The second image (cf. Figure 12) shows the video with 95% of the descriptions, i.e. with 5% missing.
The implemented MDC scheme is not able to make the video stream more resistant to
sudden peer failures. A peer that had lost one of its sources and was not able to find a
replacement in time suffered from noticeable video errors.
5 Future Work
The implemented Past DHT allows storing and retrieving framework-related information in the network, but it has some open issues, as pointed out in the previous chapter. New peers in the network do not receive the data stored in the DHT from their neighbors. A solution would be to transmit the data to the new peer over the Communication Module instead of using message routing in Past. Another solution would be to store each DHT entry separately instead of storing the data in lists: the lookup mechanism would no longer search for a single entry (a list) but would look for the data on several peers instead of just returning the first entry found. Because of the modular design it would also be possible to replace Past with another DHT implementation that would hopefully provide better data consistency.
Another missing functionality in the DHT is a cleanup mechanism. Peers that are correctly
leaving the network already clean up the DHT by removing their own entries. But peers that get disconnected because of a network failure leave their entries behind in the DHT. Over time the DHT gets filled with inaccurate information. A possible approach to solve this
problem would be: As soon as a peer detects a DHT entry that points to a stream source
that no longer exists, it sends a message into the Past network that marks the element as
faulty. If other peers also mark the same element, it will be removed.
As explained in Subsection 4.2.2, the implemented MDC scheme does not make the
system more error-tolerant in case some of the descriptions go missing when a peer suddenly disconnects. Another MDC scheme should be used that offers better results in
the case of a failure. Some examples are already given in Section 2.2 and a helpful
overview of different schemes can be found in [9].
At the moment, no solution is implemented to overcome Network Address Translation (NAT) issues. Peers that are behind a NAT firewall or router cannot connect to peers on the Internet. Proposals for solving this problem can be found in [24].
Currently an external application is used to encode, decode, play and create the video stream. In a future implementation the functionality of the VLC media player could be integrated into the Java application. The application would be more user-friendly if all components were combined into a single application.
The implemented algorithm for choosing a description source could be improved in a future version. At the moment a new source is picked randomly from the Source List. An improved algorithm would take other network properties into consideration when choosing a source: What is the available bandwidth of the source peer? What are the round-trip time and the packet loss rate of the path? What is the peer's geographical location? All these points should be taken into consideration.
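As a sketch of what such a selection could look like, the following hypothetical scoring function combines the measurable criteria; the weights and normalisation constants are arbitrary illustration values and are not part of the implemented framework.

// Hypothetical scoring function for an improved source selection.
public class SourceScoreSketch {

    /**
     * Higher score = better candidate. bandwidthKbps is the source's available upload
     * bandwidth, rttMs the measured round-trip time, lossRate the packet loss in [0, 1].
     */
    public static double score(double bandwidthKbps, double rttMs, double lossRate) {
        double bandwidthTerm = Math.min(bandwidthKbps / 384.0, 1.0); // enough for one 384 kb/s stream?
        double rttTerm = 1.0 / (1.0 + rttMs / 100.0);                // prefer close peers
        double lossTerm = 1.0 - lossRate;                            // prefer reliable paths
        return 0.5 * bandwidthTerm + 0.3 * rttTerm + 0.2 * lossTerm;
    }
}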
6 Acknowledgements
I would especially like to thank Prof. Dr. Burkhard Stiller and my supervisors Fabio Hecht and
Cristian Morariu for all their support and help during this work.
References
[21] X. Zhang, J. Liu, B. Li, and T.-S. P. Yum: CoolStreaming/DONet: a data-driven overlay
network for peer-to-peer live media streaming; in Proc. IEEE INFOCOM'05, Miami, FL,
Mar. 2005.
[22] F. Pianese, J. Keller and E. W. Biersack: PULSE, a Flexible P2P Live Streaming System; Proceedings of the Ninth IEEE Global Internet Workshop, Barcelona, Spain, 2006.
[23] R. Steinmetz, K. Wehrle: Peer-to-Peer Systems and Applications; Springer Verlag,
Berlin Heidelberg, 2005.
[24] B. Ford, P. Srisuresh and D. Kegel: Peer-to-Peer Communication Across Network
Address Translators; In USENIX Annual Technical Conference, 2005.
List of Figures
Figure 1 Basic approach of SplitStream .................................................................................14
Figure 2 Framework Overview ...............................................................................................18
Figure 3 P2P Manager ............................................................................................................19
Figure 4 Connection Manager ................................................................................................21
Figure 5 MDC Manager ..........................................................................................................22
Figure 6 Past Content Replication ..........................................................................................26
Figure 7 Actual implementation of the P2P Manager ............................................................27
Figure 8 Multiple Description Coding Example Scenario ......................................................29
Figure 9 Protocol for description request ................................................................................32
Figure 10 Disconnect stream protocol ....................................................................................33
Figure 11 Video playback with all descriptions ......................................................................35
Figure 12 Video playback with one missing description .........................................................36
List of Tables
Table 1 Qualitative comparison of MDC coding techniques ..................................................10
Table 2 List of available video support formats in JMF .........................................................11
Table 3 Video Supporting Projects for Java ...........................................................................12
Table 4 Comparison of Video Streaming Solutions ...............................................................16
Appendix A - Tutorial
A detailed tutorial on how to use the application is given in this chapter.
• Bootstrap Address: Enter the IP Address of the bootstrap node to connect to an existing
network. Otherwise enter “localhost” to create a new Pastry ring.
• Bootstrap Port: The port number of the bootstrap node.
• VLC Path: Path to the VLC media player.
Figure A.2 Inform user that a new ring has been created
File streaming: To stream from a file the following steps are needed:
1 Select “File -> Open File” to create a stream from a file.
2 Choose the correct file by using the “Browse” button (cf. Figure A.5).
3 Activate the checkbox “Stream/Save” and press the button “Settings”.
4 In the new window enter the settings as shown in Figure A.6.