Search Results for: turn

Last year we interviewed Oleg Moskalenko and presented the rfc5766-turn-server project, a free, open-source, and extremely popular TURN and STUN server implementation. A few months later we even discovered that Amazon is using this project to power its Mayday service. Since then, a number of features beyond the original RFC 5766 have been defined at the IETF, and a new open-source project was born: the coTURN project.

Today we are catching up with Oleg again to see what’s new and to learn what coTURN is about.

As Reid previously introduced in his An Intro to WebRTC’s NAT/Firewall Problem post, NAT traversal is often one of the more mysterious areas of WebRTC for those without a VoIP background. When two endpoints/applications behind NAT wish to exchange media or data with each other, they use “hole punching” techniques to discover a direct communication path that goes from one peer to another through intervening NATs and routers without traversing any relays. “Hole punching” fails if both hosts are behind certain types of NATs (e.g. symmetric NATs) or firewalls. In those cases a direct communication path cannot be found, and it becomes necessary to use an intermediate host, typically sitting on the public Internet, that acts as a relay for the media or data packets. The TURN (Traversal Using Relays around NAT) protocol allows an endpoint (the TURN client) to request that a host (the TURN server) act as a relay.

So far TURN, along with ICE and STUN, has seen little deployment. Now that it is a fundamental piece of WebRTC, it is gaining some momentum. In fact, at the IETF we’re now starting a new effort that will focus on enhancements to TURN/STUN applicable to WebRTC deployments. This new effort is called TRAM (TURN Revised and Modernized), and we’re currently discussing its charter...
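From an application’s point of view, pointing ICE at a TURN server is just a matter of listing it in the peer connection’s iceServers; the browser then falls back to the relay only when hole punching fails. Here is a minimal sketch — the turn: URL and credentials are placeholders, not a real deployment:

```typescript
// Hypothetical STUN/TURN servers and credentials, for illustration only.
const config: RTCConfiguration = {
  iceServers: [
    { urls: "stun:stun.example.org:3478" },
    {
      urls: "turn:turn.example.org:3478?transport=udp",
      username: "webrtc-user",
      credential: "secret",
    },
  ],
};

const pc = new RTCPeerConnection(config);

// Log each gathered candidate; "relay" candidates come from the TURN server.
pc.onicecandidate = (event) => {
  if (event.candidate) {
    console.log(event.candidate.type, event.candidate.candidate);
  }
};
```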

Back in April 2020, Citizen Lab reported on Zoom’s rather weak encryption and stated that Zoom uses the SILK codec for audio. Sadly, the article did not contain the raw data that would let me validate that claim and look at it further. Thankfully, Natalie Silvanovich from Google’s Project Zero helped me out using the Frida tracing tool and provided a short dump of some raw SILK frames. Analyzing these inspired me to take a look at how WebRTC handles audio. In terms of perception, audio quality is much more critical to the perceived quality of a call, as we tend to notice even small glitches. A mere ten seconds of this audio analysis were enough to set me off on quite an adventure investigating possible improvements to the audio quality provided by WebRTC.

I wanted to add local recording to my own Jitsi Meet instance. The feature wasn’t built in the way I wanted, so I set out on a hack to build something simple. That led me down the road to discovering that:

  1. getDisplayMedia for screen capture has many quirks,
  2. MediaRecorder for media recording has some of its own unexpected limitations, and
  3. Adding your own HTML/JavaScript to Jitsi Meet is pretty simple

Read on for plenty of details and some reference code. My result is located in this repo.
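The core combination is small enough to sketch here. The snippet below is a hedged illustration of pairing getDisplayMedia with MediaRecorder, not the actual code from the repo; the mimeType and timeslice values are assumptions:

```typescript
// Minimal local-recording sketch: capture the screen and record it to a Blob.
async function recordScreen(durationMs: number): Promise<Blob> {
  // Prompts the user to pick a screen/window/tab; behavior varies per browser.
  const stream = await navigator.mediaDevices.getDisplayMedia({
    video: true,
    audio: true, // tab/system audio support differs across browsers
  });

  const chunks: Blob[] = [];
  // Codec support varies; webm/vp8 is a common but assumed choice here.
  const recorder = new MediaRecorder(stream, { mimeType: "video/webm;codecs=vp8" });
  recorder.ondataavailable = (e) => {
    if (e.data.size > 0) chunks.push(e.data);
  };

  return new Promise((resolve) => {
    recorder.onstop = () => {
      stream.getTracks().forEach((t) => t.stop());
      resolve(new Blob(chunks, { type: "video/webm" }));
    };
    recorder.start(1000); // emit a chunk roughly every second
    setTimeout(() => recorder.stop(), durationMs);
  });
}
```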


Software as a Service, Infrastructure as a Service, Platform as a Service, Communications Platform as a Service, Video Conferencing as a Service, but what about Gaming as a Service? There have been a few attempts at Cloud Gaming, most notably Google’s recently launched Stadia. Stadia is no stranger to WebRTC, but can others leverage WebRTC in the same way?

Thanh Nguyen set out to see if this was possible with his open source project, CloudRetro. CloudRetro is based on Pion, the popular Go-based WebRTC library (thanks to Sean of Pion for helping review here). In this post, Thanh gives an architectural review of how he built the project along with some of the benefits and challenges he ran into along the way.

A couple of weeks ago, the Chrome team announced an interesting Intent to Experiment on the blink-dev list about an API for doing custom processing on top of WebRTC. The intent comes with an explainer document written by Harald Alvestrand which shows the basic API usage. As I mentioned in my last post, this is the sort of thing that may be able to help add End-to-End Encryption (e2ee) to WebRTC in middlebox scenarios.

I had been watching the implementation progress with quite some interest when former webrtcHacks guest author Emil Ivov of jitsi.org reached out to discuss collaborating on exploring what this API is capable of. Specifically, we wanted to see if WebRTC Insertable Streams could solve the problem of end-to-end encryption for middlebox devices outside of the user’s control, like the Selective Forwarding Units (SFUs) used for media routing.
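To give a flavor of what the explainer describes: the experimental API exposes each encoded frame on the sender as a pair of streams, so a TransformStream can rewrite the payload before it is packetized. The sketch below is illustrative only — the configuration flag and property names shifted across Chrome versions while the experiment was running, and the XOR “cipher” is a stand-in for real encryption and key management:

```typescript
// Illustrative sketch of the experimental Insertable Streams API; names such
// as encodedInsertableStreams and createEncodedStreams() are taken from the
// explainer and may differ in a given Chrome version.
async function startWithInsertableStreams() {
  const pc = new RTCPeerConnection({
    encodedInsertableStreams: true, // assumed flag enabling encoded-frame access
  } as RTCConfiguration);

  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  const sender = pc.addTrack(stream.getVideoTracks()[0], stream);

  // Obtain the encoded-frame streams exposed by the experimental API.
  const { readable, writable } = (sender as any).createEncodedStreams();

  // Toy "encryption": XOR every payload byte with a fixed key byte.
  const xorTransform = new TransformStream({
    transform(frame: any, controller) {
      const data = new Uint8Array(frame.data);
      for (let i = 0; i < data.length; i++) data[i] ^= 0x55;
      frame.data = data.buffer;
      controller.enqueue(frame);
    },
  });

  readable.pipeThrough(xorTransform).pipeTo(writable);
  return pc;
}
```

An SFU in the middle would still be able to route these frames, but could no longer read the payload — which is exactly the middlebox e2ee scenario the post explores.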

WebRTC has made getting and sending real-time video streams (mostly) easy. The next step is doing something with them, and machine learning lets us have some fun with those streams. Last month I showed how to run Computer Vision (CV) locally in the browser. As I mentioned there, local is nice, but sometimes you need more performance, so you have to run your machine learning inference on a remote server. In this post I’ll review how to run OpenCV models server-side with hardware acceleration on Intel chipsets using Intel’s open source Open WebRTC Toolkit (OWT).

Note: Intel sponsored this post. I have been wanting to play around with the OWT server since they demoed some of its CV capabilities at Kranky Geek, and this gave me a chance to work with their development team to explore its capabilities. Below I share some background on OWT, how to install it locally for quick testing, and show some of the models.

Intel OWT emotions plugin testing

Time for another opinionated post. This time on… end-to-end encryption (e2ee). Zoom apparently claims it supports e2ee while it cannot satisfy that promise. Is WebRTC any better?

Zoom does not have End-to-End Encryption

Let’s get to the bottom of things fast: Boo Zoom!

I reviewed how Zoom implements their web client last year.

I’m not really surprised by their general lack of e2ee, given that their web client did not provide any encryption on top of TLS or WebRTC’s DataChannel. For reasons we will discuss below, this means they weren’t doing any obvious e2ee there.


Don’t touch your face! To prevent the spread of disease, health bodies recommend not touching your face with unwashed hands. This is easier said than done if you are sitting in front of a computer for hours. I wondered: is this a problem that can be solved with a browser?

We have a number of computer vision + WebRTC experiments here. Experimenting with running computer vision locally in the browser using TensorFlow.js has been on my bucket list, and this seemed like a good opportunity. A quick search revealed somebody had already thought of this two weeks ago. That site used a model that requires some user training – which is interesting but can make it flaky. It also wasn’t open source for others to expand on, so I did some social distancing via coding isolation over the weekend to see what was possible.

Check it out at facetouchmonitor.com and keep reading below for how it works. All the code is available at github.com/webrtchacks/facetouchmonitor. I share some highlights and an alternative approach here.
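To give a sense of what running a model locally in the browser looks like, here is a minimal sketch that feeds a webcam stream into TensorFlow.js’s BlazeFace face detector. The model choice and the loop structure are illustrative assumptions, not necessarily what facetouchmonitor itself ships:

```typescript
import "@tensorflow/tfjs";                              // core + default backends
import * as blazeface from "@tensorflow-models/blazeface";

// Hypothetical detection loop: grab the webcam, load BlazeFace, and log
// face bounding boxes on every animation frame.
async function detectFaces() {
  const video = document.createElement("video");
  video.srcObject = await navigator.mediaDevices.getUserMedia({ video: true });
  await video.play();

  const model = await blazeface.load();

  const loop = async () => {
    const faces = await model.estimateFaces(video, false /* returnTensors */);
    for (const face of faces) {
      console.log("face at", face.topLeft, face.bottomRight);
    }
    requestAnimationFrame(loop);
  };
  requestAnimationFrame(loop);
}

detectFaces();
```

Detecting a face-touch rather than just a face then becomes a question of combining detections (for example face and hand regions) and checking whether they overlap, which is where the interesting modeling choices come in.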