Success of Over-The-Top (OTT) Service Providers and the Troubles of Operators
Many local and global Over-The-Top (OTT) service providers (YouTube, Netflix, Pooq, Naver, Tving, etc.) have appeared in recent years, providing high-quality video services on a wide range of wired and wireless devices such as PCs, smartphones, tablets, smart TVs and STBs. Advances in Internet video delivery technologies, from Progressive Download to HTTP Adaptive Bit Rate (ABR) streaming, have enabled seamless and stable video playback even over IP networks with no guaranteed Quality of Service (QoS). Video quality has also improved significantly, from 500~700Kbps to 2~7Mbps, contributing to the sharp increase in OTT service users.
This shift in demand, from video content provided by operators to content from OTT service providers, has hurt sales of operator video services and has generated enormous amounts of traffic on international transit links, backbone networks and wired/wireless access networks, imposing the burden of increased network costs on operators. One Korean operator found that 36% of the traffic on its international transit links in 2012 was YouTube traffic, while another Korean operator reported that 53% of all traffic on its mobile communication network in 2012 was OTT video traffic. Operators around the world are trying to minimize the network costs incurred by OTT traffic and, furthermore, to discover new revenue models that leverage OTT.
Qwilt Universal Video Delivery Solution: Caching Video Content within the Operator Network
To deliver video content, OTT content providers typically have to build their own CDNs or use commercial CDN services, whose servers are normally located outside the operator’s network. Recently, operators have started to adopt Transparent Caching (or TIC: Transparent Internet Caching) technology in their commercial networks, substantially reducing traffic and network costs by caching OTT video traffic at the edge of the operator network. Transparent Caching stores popular OTT video content inside the operator’s network and, upon receiving a request for that content from a user, serves the cached copy rather than downloading it again from the OTT origin server. This allows operators to save on international transit and backbone costs, while users receive high-quality video services in a stable manner and with faster response times, since the content is delivered from nearby caching servers.
Transparent Caching technology is a strategic answer to the operator troubles described above. It first appeared as an attempt to reduce network costs by caching P2P files. In recent years, OTT video traffic has overtaken P2P traffic, spurring the need for caching technology specialized for video, with different characteristics and requirements from legacy P2P caching solutions. Qwilt (www.qwilt.com) is one of the specialized video caching vendors that can fulfill this demand.
In this report, we analyze Qwilt’s QB-Series Transparent Caching solution.
While existing Transparent Caching solutions identify content by analyzing the HTTP URL and response messages before caching it, Qwilt applies deep video classification to the entire HTTP transaction to identify and cache content. Qwilt’s QB-Series analyzes video requests and responses sent over HTTP by receiving a copy of the overall data traffic through an optical tap or port mirroring. The QB-Series then creates a unique Content ID (CID) for the requested video file from a combination of parameters within the HTTP transaction, such as the URI path, URI parameters and HTTP headers. For each new content request, the QB-Series searches its internal CID database to see whether the requested video file is already cached. If this is the first request for the video, the QB caches the video file from the original client transaction observed through the optical tap or mirror port. When a request for the same video file arrives from another user, the QB creates the CID in the same way, searches the content database, and finds a cache hit. The QB then sends an HTTP 302 Redirect message to the user’s device, instructing it to download the content from the QB, and sends a TCP FIN with the origin server’s IP address as the source address so that the device releases the TCP session with the origin server and stops downloading the file from it. Finally, the user device sends the video request to the QB, which delivers the cached video file.
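As a rough illustration of the flow above, the following Python sketch shows how a Content ID might be derived from stable parts of an HTTP transaction and how a cache hit could trigger a 302 redirect. The excluded query keys, the selected header, the cid_db store and the delivery host name are hypothetical placeholders for illustration only, not Qwilt’s actual implementation.

```python
import hashlib
from urllib.parse import urlparse, parse_qsl

# Hypothetical in-memory CID database: CID -> state/location of the cached file.
cid_db = {}

QB_DELIVERY_HOST = "qb.example.operator.net"   # placeholder delivery address

def make_cid(method, url, headers):
    """Derive a Content ID from stable parts of the HTTP transaction.

    Volatile parameters (e.g. byte ranges, signatures) are excluded so that
    every request for the same video maps to the same CID.
    """
    parsed = urlparse(url)
    stable_params = sorted(
        (k, v) for k, v in parse_qsl(parsed.query)
        if k not in ("range", "signature", "expire")   # assumed volatile keys
    )
    material = "|".join([
        method,
        parsed.netloc,
        parsed.path,
        repr(stable_params),
        headers.get("Host", ""),   # selected headers could also be mixed in
    ])
    return hashlib.sha1(material.encode()).hexdigest()

def handle_request(method, url, headers):
    """Return a 302 redirect on a cache hit, or a 'learn' decision on a miss."""
    cid = make_cid(method, url, headers)
    if cid_db.get(cid) == "cached":
        # Cache hit: point the client at the QB appliance instead of the origin.
        return {"status": 302,
                "Location": f"http://{QB_DELIVERY_HOST}/video/{cid}"}
    # Cache miss: let the original transaction proceed; the response observed
    # on the tap/mirror interface is then stored under this CID.
    cid_db.setdefault(cid, "learning")
    return {"status": "learn", "cid": cid}
```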
Unlike existing TIC solutions, which use generic object checksum hashing to identify content, Qwilt’s QB-Series identifies content by analyzing complete HTTP transactions. It also employs a unique Caching Logic that leverages the 302 Redirect mechanism to direct users to retrieve content from a QB-Series appliance close to their location instead of from the origin server. Compared to existing systems, the QB-Series offers advantages in network stability, network performance, caching performance, caching efficiency and support for new revenue models.
● Problem Description: Existing caching solutions typically require the integration of disparate caching servers, storage enclosures and switches to produce a workable system. In addition, these solutions require Policy-Based Routing (PBR) configuration on routers in the existing network to classify and redirect traffic, or the introduction of expensive Deep Packet Inspection (DPI) equipment. They typically occupy considerable rack space, consume large amounts of power, and drive operating costs higher because of the large amount of hardware to be managed. For example, the configuration of an existing TIC solution handling 10Gbps of forwarding traffic is illustrated on the left side of the following figure. If such a 10Gbps system fills a full rack, then 10 racks are needed to support 100Gbps of capacity. Building a TIC solution around new or existing DPI equipment is also disadvantageous for operators, since it requires extensive interworking tests and verification between the DPI equipment and the TIC solution before the system can be introduced into the commercial network, ruling out immediate deployment. In addition, it places a large processing overhead on the DPI equipment.
● Qwilt’s Solution: Qwilt’s QB-Series creates a Transparent Caching solution within existing networks without requiring any PBR configuration or the introduction of DPI equipment. Each QB-Series appliance supports acquisition, classification, analysis, delivery and load balancing of video traffic. Qwilt’s dedicated Transparent Caching Logic for Internet video enables a single 2U server to analyze 20Gbps of traffic and deliver 10Gbps of video traffic, allowing operators to save space, power and operating costs. Compared to existing systems, a single rack is sufficient to deliver 100Gbps of video with the QB-Series. Qwilt is the only TIC vendor with a Deep Video Classification engine embedded in software, supporting detailed traffic classification and monitoring without additional DPI gear. Classifying and redirecting traffic through DPI requires interworking between the DPI equipment and the TIC servers; because the QB-Series introduces no inter-vendor interworking issues, TIC servers can be introduced into the commercial network immediately.
● Problem Description: Existing Transparent Caching solutions typically use router PBR configuration to redirect all port 80 packets to the TIC server, which then receives the HTTP packets exchanged between user devices and origin servers. Even when the operator intends to cache only Internet video files, all HTTP traffic is directed to the TIC server for analysis, and packets that are not video traffic are returned to the router. In other words, the TIC server must handle even traffic that is irrelevant to caching. The problem is that the router forwards packets in hardware (a packet-forwarding ASIC) while the TIC server forwards them in software (the CPU), so overall network performance deteriorates. Because all port 80 traffic passes through the TIC server, introducing one increases the latency of general web traffic. To redirect only the OTT sites to be cached, additional DPI equipment must be installed, which increases capital and operational costs. The router PBR scheme also forces both web traffic and video traffic through the TIC server: if the TIC server goes down, communication is interrupted not only for the video service but also for general web surfing. Solving this requires duplicating the TIC server and installing additional L4 switches, increasing network costs yet again.
● Qwilt’s Solution: Qwilt’s QB-Series uses an out-of-band insertion model with two logical interface sets towards the IP network – a router-facing video delivery interface and a passive optical tap interface used for traffic analysis. All bi-directional packets are delivered to the QB-Series through the optical tap interface. When the QB receives a packet, the Deep Video Classification engine analyzes the HTTP message, sends request or response messages for the OTT sites to be cached (for example, YouTube, Netflix, Pooq, etc.) to the caching engine, and disregards other, non-cacheable packets. If the caching engine receives a request for a video file that is not yet cached, it receives and caches the video file from the response observed on the optical tap interface. If the request is for a video file that is already cached, the engine sends a 302 Redirect message to the user device, instructing it to download the video file from the QB appliance. In other words, the QB-Series only caches and delivers video traffic for the OTT sites to be cached and does not sit in the path of other general web traffic. Because it does not handle traffic that is not to be cached, unlike the inline (PBR) method, introducing a TIC server causes no deterioration of overall IP network performance. Even if the QB is down, video requests from user devices are simply not redirected and the devices download the video files from the origin server; since general web surfing traffic never touches the QB, no loss of connectivity or degradation of performance occurs.
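The decision logic described above can be summarized in a short sketch, assuming the mirrored traffic has already been parsed into HTTP request records. The whitelist of cacheable sites and the cache interface are illustrative assumptions, not Qwilt’s API.

```python
# Hypothetical whitelist of OTT sites whose video traffic should be cached.
CACHEABLE_SITES = {"youtube.com", "netflix.com", "pooq.co.kr"}

def classify_mirrored_request(request, cache):
    """Decide what to do with one HTTP request seen on the optical tap.

    Returns "ignore" (traffic the QB never touches), "learn" (cache miss:
    record the upcoming response), or "redirect" (cache hit: the client is
    sent a 302 pointing at the QB appliance).
    """
    host = request["host"].lower()
    if not any(host == s or host.endswith("." + s) for s in CACHEABLE_SITES):
        return "ignore"            # non-cacheable sites pass through untouched
    if not request.get("is_video", False):
        return "ignore"            # web pages, thumbnails, etc. are not cached
    cid = request["cid"]           # derived as in the earlier CID sketch
    if cache.contains(cid):
        return "redirect"          # client fetches the file from the QB
    cache.start_learning(cid)      # copy the response off the tap interface
    return "learn"
```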
● Problem Description: Many existing Internet video services are delivered over-the-top using commercial CDN services. Listed below are the demands and challenges faced by users, OTT content providers and operators:
Users: Users demand TV-grade, high-quality video. They want to receive Full-HD Internet video on tablets and smart TVs as well as on PCs and notebooks.
OTT Content Providers: OTT content providers want to expand their subscriber base and improve earnings per subscriber by providing Full-HD video service. This, in turn, inherently increases video traffic, prompting the content provider to pay more for CDN services.
Operators: In the wired network, the backbone network has a high traffic utilization rate, while the edge and access networks have relatively more available link bandwidth. For example, the capacity of an access link may be 100Mbps, but only a few users consume it fully. Operators that began supplying 100Mbps access links to homes 5~6 years ago looked for revenue sources other than Internet service fees over this pipe, and enhanced profit per subscriber by providing IPTV/VoD services. Even with Internet access and IPTV/VoD services, edge and access links remain underutilized. It is therefore important for operators to find additional revenue sources that make use of the surplus bandwidth of the edge and access links.
● Qwilt’s Solution: By introducing TIC servers at the edge of the IP network, operators can save on backbone network costs and create new revenue opportunities.
Operators: By positioning TIC servers at the edge, operators can deliver Full-HD OTT video content to users over the idle bandwidth of the edge and access links, without increasing load on the backbone network, securing a new revenue model (caching fees) without additional investment in their IP networks.
OTT Content Providers: By providing Full-HD, TV-grade video at 5~7Mbps rather than the existing 1~2Mbps service, OTT content providers can expand their subscriber base and enhance customer Quality of Experience (QoE). Through the TIC servers in the operator networks, OTT providers can deliver this high-quality service at a lower cost than using a third-party CDN.
Users: Users can enjoy seamless and stable Full-HD video, resulting in increased loyalty to both the OTT content providers and the operators.
● Problem Description: Early Internet video services often used HTTP Progressive Download (PDL) delivery. More recently, online video services have evolved to HTTP Adaptive Bit Rate (ABR) technologies, which generate less traffic and provide superior QoE. Unlike PDL, in which the user downloads the entire video file at once, ABR divides a file into small chunks a few seconds in length, and the client downloads chunks continuously while the user watches the video.
As the video delivery method evolves from PDL to ABR, the caching performance of existing TIC servers deteriorates substantially. Existing TIC servers typically hash about 10KB at the start of the video file and use the result as the CID. The problem is that while PDL requires a single hashing calculation per file, ABR requires thousands of hashing calculations per viewing. For example, if a 1-hour video is divided into 2-second chunks, 1,800 hashing calculations are required. If a TIC server serves this cached content to 20,000 subscribers simultaneously via ABR, it must perform 36,000,000 hashing calculations per hour, or 10,000 per second. With the rapid increase of OTT content providers adopting ABR, demand for alternative TIC technology that can handle ABR traffic efficiently is also growing.
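The hashing load quoted above can be reproduced with a few lines of arithmetic, using the figures from the example in the text (a 1-hour video in 2-second chunks, served to 20,000 concurrent viewers):

```python
video_length_s = 3600          # 1-hour video
chunk_length_s = 2             # 2-second ABR chunks
concurrent_viewers = 20_000

chunks_per_video = video_length_s // chunk_length_s       # 1,800 chunks
hashes_per_hour = chunks_per_video * concurrent_viewers   # 36,000,000
hashes_per_second = hashes_per_hour // 3600               # 10,000

print(chunks_per_video, hashes_per_hour, hashes_per_second)
# 1800 36000000 10000
```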
● Qwilt’s Solution: Unlike the object-based method, which requires a hashing calculation for every chunk, Qwilt identifies content based on the HTTP transaction and manages all the chunks of a file under a single CID. Because no hashing calculation is required per chunk request, it provides higher throughput than existing TIC servers when delivering video files via ABR. According to Tolly’s test report, Qwilt’s QB-Series provides five times higher throughput than existing vendors’ solutions [Tolly, Qwilt QB-100 Transparent Caching and Video Delivery Platform: Performance and Feature Evaluation, 2012].
● Problem Description: Many OTT content providers have shifted to ABR video delivery – in particular Netflix, the top OTT content provider in the world, accounting for 33% of total Internet traffic in the USA, employs ABR streaming to deliver video content. Netflix ABR delivery requests chunks (video file pieces several seconds in length) through HTTP byte-range requests, as illustrated in the following figure (a). Netflix, as well as several other OTT providers, uses variable start and end bytes for each chunk requested by its player, different for each viewing session. As illustrated in the following figure (b), when users A and B watch the same video, the starting and ending bytes of each chunk differ.
Netflix’s video request method reduces the caching capability of existing TIC solutions. Existing TIC solutions typically identify content by hashing about 10KB of the requested file, without referencing the HTTP transaction. Not recognizing that the chunks being delivered are parts of a larger file, the TIC server treats each chunk as a separate file. Because the same video file is delivered with different chunk boundaries each time, the TIC server sees the chunks as different files, and no cache hits occur.
● Qwilt’s Solution: Because the QB-Series identifies content based on the HTTP transaction, it recognizes that the chunks being delivered belong to a single video file. The QB-Series caches the chunks and stores them as one contiguous video file, in the same form as the original file on the Netflix server. It can then serve the proper chunks for any requested byte range, increasing caching efficiency substantially.
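The following sketch illustrates the idea of assembling variable byte-range chunks into one contiguous cached object and serving arbitrary ranges from it. The storage layout and method names are simplifying assumptions for illustration, not Qwilt’s actual design.

```python
class CachedVideo:
    """Store chunks of one CID as a single contiguous byte buffer."""

    def __init__(self, total_size):
        self.data = bytearray(total_size)
        self.filled = [False] * total_size   # which bytes have been cached

    def store_chunk(self, start, payload):
        """Write a chunk observed on the tap at its absolute file offset."""
        self.data[start:start + len(payload)] = payload
        for i in range(start, start + len(payload)):
            self.filled[i] = True

    def serve_range(self, start, end):
        """Serve an arbitrary byte range if every byte in it is cached."""
        if all(self.filled[start:end + 1]):
            return bytes(self.data[start:end + 1])
        return None   # partial miss: fall back to the origin server

# Two viewers requesting different, overlapping ranges are both served
# from the same cached object, regardless of how their players chunked it.
video = CachedVideo(total_size=100)
video.store_chunk(0, b"x" * 60)
video.store_chunk(60, b"y" * 40)
assert video.serve_range(10, 69) is not None
assert video.serve_range(55, 99) is not None
```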
● Problem Description: When users access YouTube from their mobile devices, the video request pattern is very different from that of the wired PC environment. Because radio resources are limited, and in order to minimize the amount of video delivered but never watched, the YouTube mobile client downloads about half of the video file rather than requesting the entire file at once. If the user keeps watching, the client requests the next quarter of the file, and then the last quarter. This reduces the unnecessary traffic generated when a user abandons one video to watch another: delivering only as much of the file as is actually being watched prevents transmission of video data that would be received but never viewed. The problem is that the byte ranges corresponding to these half and quarter portions vary every time the same content is downloaded, even by the same device. In the following figure, a user watches a Wonder Girls video on YouTube several times on a Galaxy Note over the KT LTE network; the byte-range values requested by the device differ between the requests. Existing object-based TIC servers, which identify content by hashing about 10KB from the head of the content file, treat each chunk as different content because the chunk boundaries are inconsistent, generating no cache hits. Operators should therefore not expect a major traffic reduction from introducing previous-generation TIC servers that use object-based classification.
● Qwilt’s Solution: The QB-Series identifies content based on the logic within HTTP transactions and can determine that the chunks being delivered belong to a single video file. The QB-Series caches the chunks and stores them as a single video file, similar to the original file on the YouTube server. It then delivers the proper chunks for any requested byte range, increasing caching efficiency substantially. This is possible because the Caching Logic is designed with the understanding that Internet video files can be requested at any length, as in the Netflix video caching example above.
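To illustrate why varying byte ranges do not defeat transaction-based identification, this short sketch maps two requests with different range parameters to the same CID. The URL format and the list of volatile keys are purely illustrative, not YouTube’s or Qwilt’s actual scheme.

```python
import hashlib
from urllib.parse import urlparse, parse_qsl

VOLATILE_KEYS = {"range", "rn", "rbuf"}   # assumed per-request parameters

def normalize_request(url):
    """Map a video request to a CID, ignoring per-request byte ranges."""
    parsed = urlparse(url)
    stable = sorted((k, v) for k, v in parse_qsl(parsed.query)
                    if k not in VOLATILE_KEYS)
    return hashlib.sha1(f"{parsed.path}?{stable}".encode()).hexdigest()

# Two views of the same clip, requested with different byte ranges,
# still resolve to one cached object (URLs are illustrative only).
req_a = "http://video.example.com/videoplayback?id=abc123&range=0-4194303"
req_b = "http://video.example.com/videoplayback?id=abc123&range=4194304-6291455"
assert normalize_request(req_a) == normalize_request(req_b)
```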
Netflix’s security framework is composed of user authentication, device authentication and content encryption (DRM: Digital Rights Management), as illustrated in the following figure. The device receives the URL of the requested video file after user authentication (ID/password) and device authentication (up to 50 devices can be used with a single user account). The device then sends the credentials (CTicket) and the Stream ID received during the authentication procedure to the Netflix License Server to acquire the DRM key for the video file. Having obtained the URL and the DRM key, the device sends the content request message to the Netflix Streaming Server, and Qwilt’s QB delivers the cached video file. Since the QB-Series is involved only from the actual video request onward, and not during the authentication procedure, it has no impact on Netflix’s business logic.