webrtcH4cKS: ~ Part 2: Building an AIY Vision Kit Web Server with UV4L
In part 1 of this series, I showed how to use UV4L with the AIY Vision Kit to send the camera stream and any of the default annotations to any point on the Web with WebRTC. In this post I will build on that by showing how to send image inference data over a WebRTC dataChannel and render annotations in the browser. To do this we will use a basic Python server, tweak some of the Vision Kit samples, and leverage the dataChannel features of UV4L.
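To give a flavor of the approach before diving in: UV4L can bridge a WebRTC dataChannel to a Unix domain socket on the Pi, so sending inference data to the browser amounts to writing JSON messages to that socket from Python. The sketch below is illustrative only — the socket path and the message shape are assumptions for this example, not the exact code used later in the post.

```python
import json
import socket

# Assumed socket path -- UV4L's actual dataChannel socket path is set
# in its configuration; this value is a placeholder for illustration.
UV4L_SOCKET_PATH = "/tmp/uv4l.socket"


def encode_annotation(label, score, box):
    """Serialize one inference result as a newline-delimited JSON message.

    `box` is (x, y, w, h) in pixels; the message shape is a hypothetical
    schema the browser side would parse to draw annotations.
    """
    msg = {
        "label": label,
        "score": round(score, 3),
        "box": {"x": box[0], "y": box[1], "w": box[2], "h": box[3]},
    }
    return (json.dumps(msg) + "\n").encode("utf-8")


def send_annotation(data, path=UV4L_SOCKET_PATH):
    """Write an encoded message to the Unix socket that UV4L bridges
    to the browser's dataChannel."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(path)
        s.sendall(data)
```

On the browser side, each write arrives as a dataChannel message that can be JSON-parsed and drawn over the video element.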
To fully follow along you will need a Vision Kit and should have completed all the instructions in part 1. If you don’t have a Vision Kit, you may still get some value out of seeing how UV4L’s dataChannels can be used to easily send data from a Raspberry Pi to your browser application.