All posts by Victor Pascual

Editor's note: see the updated version of this post here.

As described in previous posts, WebRTC does not specify a particular signalling model other than the generic need to exchange Session Description Protocol (SDP) media descriptions in the offer/answer fashion.

During the last few months, my friend Antón Román (CTO of Quobis) and I have spent a lot of time with our team figuring out how to manipulate and adapt the SDPs generated by web browsers to make them compatible with the different server/gateway technologies we're working with.

WebRTC makes use of new mechanisms as well as existing ones that have seen little deployment in real networks to date. As a result, the SDPs generated by web browsers are more complex and contain a number of new attributes that are unfamiliar in SIP or IMS networks. In the following post, Antón analyses the anatomy of a WebRTC SDP, giving a detailed description of what all those lines do. ...  Continue reading
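To give a feel for the kind of manipulation involved, here is a minimal sketch of filtering browser-specific attributes out of an SDP before forwarding it to a legacy gateway. The attribute list and the sample SDP fragment below are illustrative assumptions, not a complete interop recipe.

```javascript
// Hypothetical filter: strip browser-generated SDP attributes that a
// legacy SIP/IMS gateway may not understand. The attribute list below
// is illustrative, not exhaustive -- real interop work is more involved.
function stripUnfamiliarAttributes(sdp) {
  const unfamiliar = ['a=ice-options:', 'a=msid:', 'a=rtcp-mux', 'a=ssrc:'];
  return sdp
    .split('\r\n')
    .filter(line => !unfamiliar.some(prefix => line.startsWith(prefix)))
    .join('\r\n');
}

// Example: a fragment of a browser-generated SDP media description
const browserSdp = [
  'm=audio 9 UDP/TLS/RTP/SAVPF 111',
  'a=rtcp-mux',
  'a=ssrc:12345 cname:user@example.com',
  'a=rtpmap:111 opus/48000/2'
].join('\r\n');

console.log(stripUnfamiliarAttributes(browserSdp));
```

In a real deployment this kind of rewriting would happen either in the JavaScript application (on the SDP string before it is sent over the signalling channel) or in a gateway sitting between the browser and the SIP network.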

For the last year and a half I've been working with a number of customers, helping them to understand what WebRTC is about, supporting them in the definition of new products and services, and in some cases even developing WebRTC prototypes/labs for them. I've spent time with Service Provider, Enterprise and OTT customers, and the very first time I demoed WebRTC to them, after the initial 'wow moment', almost all of them complained about the 'call setup delay', which in some cases amounted to tens of seconds. ...  Continue reading

As Reid previously introduced in his An Intro to WebRTC’s NAT/Firewall Problem post, NAT traversal is often one of the more mysterious areas of WebRTC for those without a VoIP background. When two endpoints/applications behind NAT wish to exchange media or data with each other, they use “hole punching” techniques in order to discover a direct communication path that goes from one peer to another through intervening NATs and routers without traversing any relays. “Hole punching” techniques will fail if both hosts are behind certain types of NATs (e.g. symmetric NATs) or firewalls. In those cases, a direct communication path cannot be found and it’s necessary to use the services of an intermediate host that acts as a relay for the media or data packets, which typically sits on the public Internet. The TURN (Traversal Using Relays around NAT) protocol allows an endpoint (the TURN client) to request that a host (the TURN server) act as a relay. So far TURN, along with ICE and STUN, has seen little deployment. Now that it is a fundamental piece of WebRTC, it is gaining some momentum. In fact, at the IETF we’re now starting a new effort that will focus on enhancements to TURN/STUN that will be applicable to WebRTC deployments. This new effort is called TRAM (TURN Revised and Modernized), and we’re currently discussing its charter...  Continue reading
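In application code, adding a TURN relay as a fallback comes down to including it in the ICE server list handed to the peer connection. A minimal sketch, where the hostnames, port and credentials are all placeholders:

```javascript
// Sketch of an ICE server configuration: try STUN-based hole punching
// first, and fall back to a TURN relay for hosts behind symmetric NATs
// or restrictive firewalls. All hostnames and credentials are placeholders.
function buildIceConfig(turnHost, username, credential) {
  return {
    iceServers: [
      { urls: 'stun:stun.example.org:3478' },                       // direct path discovery
      { urls: 'turn:' + turnHost + ':3478', username, credential }  // relay fallback
    ]
  };
}

const config = buildIceConfig('turn.example.org', 'alice', 's3cret');
// In the browser this would be passed straight to the peer connection:
//   const pc = new RTCPeerConnection(config);
console.log(JSON.stringify(config, null, 2));
```

ICE then gathers candidates from all configured servers and only falls back to the relayed path when no direct path can be established.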

WebRTC is gaining traction, and there are exciting changes underway. The WebRTC Conference and Expo III at the Santa Clara Convention Center, November 19-21, will focus on the information you need to deliver WebRTC-based solutions in your environment. The Developer track includes six extended workshops where the experts will show you how to optimize your WebRTC development. With topics ranging from an introductory tutorial and training through deploying WebRTC, signaling, using the data channel, and mobile deployments, these sessions will give you the insights you need to deploy right the first time. In addition, a special session with actual users of the best tools in the industry will let you decide which is best to use for your development. ...  Continue reading

As we discussed in previous posts, the IETF is meeting this week in Vancouver. There have been lots of interesting discussions, including two sessions for the RTCWeb WG; the agenda for the two sessions can be found here. The first session, which was held on Monday, was mainly about updates on the JSEP (Javascript Session Establishment Protocol) specification, DataChannel, and RTP Usage, with some interesting discussion on simulcast. During Monday's session the security drafts [1][2] were also covered, but those unfortunately have not been updated since July and do not yet reflect discussions held during the last IETF meeting. Some preliminary notes from Monday can be found here...  Continue reading

With all the drama of the video codec debate ramping up for a Mandatory To Implement (MTI) decision (previously discussed here and here), hopefully it will be a minor footnote in the history of WebRTC very soon.  If you had to summarize the possible outcomes, interested stakeholders, and sentiments in one picture, here is what the webrtcHacks team thinks it might look like:

A few notes explaining the diagram: the sentiments of “happy with it”, “fine, I’ll live with it”, or “this crushes all my hopes and dreams” in this case center mostly around the desire for interoperability.  In the case of “VP8 & H.264”, it assumes that all the browser implementations end up fully supporting both codecs, but that they are completely negotiable (and possibly constrainable) on a session-by-session basis for each application using them.  This way non-browser implementations could implement VP8 or H.264 depending on their preference, and be guaranteed interoperability. ...  Continue reading

In the WebRTC standardisation post I mentioned that one of the controversial discussions in the IETF context was the mandatory to implement (MTI) video codec battle for WebRTC. While there are some technical arguments on the topic (e.g. this VP8 vs H.264 subjective evaluation and this performance-comparison discussion), there is no dispute that both are high-quality and efficient video codecs. The issue here is all about IPR and licensing, as described in this interesting and ongoing discussion: “VP8 vs H.264 – the core issue“. ...  Continue reading

Last week I attended the Illinois Institute of Technology Real-Time Communications (IIT-RTC) Conference in Chicago.  This event has a history of attracting key players from around the RTC world. It features discussion that is distilled down to the key trends and technology challenges in the industry, with very little “fluff” on top.  This year the IIT-RTC conference was co-located with IPTComm as well, adding to the quality of the content.

Topics at the conference touched on many segments of RTC, including IMS, RCS, E-911, OTT, and more.  Our own Victor Pascual sits on the steering committee for the Web and Emerging Technologies track, where WebRTC was given particular focus.  It began with a fantastic WebRTC tutorial from Alan Johnston (co-author of the SIP specification and a dozen other IETF RFCs) and Dan Burnett (co-editor of the W3C WebRTC specification).  They are also both co-authors of “WebRTC:  APIs and RTCWeb Protocols of the HTML5 Real-Time Web”, and provided a fantastic expert introduction to WebRTC APIs and methodologies.  This set the tone for lots of excellent presentations, expert perspectives, demonstrations, and discussion on WebRTC over the next few days.  Here are some discussions I found particularly interesting: ...  Continue reading

As discussed in previous posts, the mission of the W3C WebRTC WG is to define client-side APIs to enable Real-Time Communications in Web-browsers. At a very high-level overview, there are three main steps to be taken when setting up a WebRTC session:

  • Obtain local media – provides access to local media input devices such as webcams and microphones
  • Establish a connection between the two browsers – peer-to-peer media session between two endpoints, including any relaying necessary, allowing two users to communicate directly
  • Exchange Media or Data – allows the web application to send and receive media or data over the established connection

The getUserMedia() method is generally used to obtain access to local devices, and it requires user permission before accessing the device. In this post, John McLaughlin, Eamonn Power and Miguel Ponce de Leon from openRMC will be looking more closely at the getUserMedia() method, and how to deal with its outputs in order to give some meaningful feedback to the developer, and ultimately the end user. More concretely, and quoting their own words: ...  Continue reading
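One part of turning getUserMedia() outputs into meaningful feedback is mapping its failure cases to messages a user can act on. A minimal sketch, where the error names follow those browsers commonly report and the wording is purely illustrative:

```javascript
// A small helper that turns getUserMedia() failures into messages that
// are meaningful to the end user. The error names follow the ones
// browsers commonly report; the wording is illustrative.
function explainMediaError(errName) {
  const messages = {
    NotAllowedError: 'Permission to use the camera/microphone was denied.',
    NotFoundError: 'No camera or microphone was found on this device.',
    NotReadableError: 'The device is already in use by another application.'
  };
  return messages[errName] || 'Could not access local media (' + errName + ').';
}

// In the browser this would sit in the rejection handler:
//   navigator.mediaDevices.getUserMedia({ audio: true, video: true })
//     .then(stream => attachToVideoElement(stream))   // hypothetical helper
//     .catch(err => showToUser(explainMediaError(err.name)));
console.log(explainMediaError('NotAllowedError'));
```

The success path hands the application a MediaStream that can be attached to a video element for local preview or added to a peer connection for sending.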

As I described in the standardization post, the model used in WebRTC for real-time, browser-based applications does not envision that the browser will contain all the functions needed to act as a telephone or video conferencing unit. Instead, it specifies that the browser will contain the functions needed to run a Web application, which works in conjunction with back-end servers to implement telephony functions as required. According to this, WebRTC is meant to implement the media plane but to leave the signalling plane up to the application. Different applications may prefer to use different protocols, such as SIP or something custom to the particular application. In this approach, the key information that needs to be exchanged is the multimedia session description, which specifies the configuration necessary to establish the media plane. In other words, WebRTC does not specify a particular signalling model other than the generic need to exchange SDP media descriptions in the offer/answer fashion. However, the browser is totally decoupled from the actual mechanism by which these offers and answers are communicated to the remote side.  ...  Continue reading
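Because the browser is decoupled from the signalling mechanism, the application has to pick its own wire format for carrying the offer and answer. A minimal sketch of one possible JSON framing, assuming a hypothetical WebSocket signalling channel:

```javascript
// Since WebRTC leaves signalling to the application, the offer/answer
// exchange must be framed in whatever format the app chooses. Below is
// one possible minimal JSON framing -- an assumption for illustration,
// not a standard format.
function frameSignal(type, sdp) {
  if (type !== 'offer' && type !== 'answer') {
    throw new Error('unsupported signalling message type: ' + type);
  }
  return JSON.stringify({ type, sdp });
}

function parseSignal(raw) {
  const msg = JSON.parse(raw);
  return { type: msg.type, sdp: msg.sdp };
}

// In the browser the framed message would travel over the app's own
// signalling channel, e.g. a WebSocket:
//   pc.createOffer().then(offer => ws.send(frameSignal(offer.type, offer.sdp)));
const wire = frameSignal('offer', 'v=0\r\n...');
console.log(parseSignal(wire).type);
```

The remote side parses the message, applies the SDP to its own peer connection, and sends back an answer framed the same way; the browsers themselves never see the WebSocket.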