raspberry pi

All posts tagged raspberry pi

In part 1 of this series, I showed how to use UV4L with the AIY Vision Kit to send the camera stream and any of the default annotations anywhere on the Web with WebRTC. In this post I will build on that by showing how to send image inference data over a WebRTC DataChannel and render the annotations in the browser. To do this we will use a basic Python server, tweak some of the Vision Kit samples, and leverage the DataChannel features of UV4L.
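As a rough sketch of the idea, the Python side pushes each inference result as JSON into the local socket that UV4L bridges to its WebRTC DataChannel. The socket path and message format below are assumptions for illustration, not the exact code from the post; check your UV4L server configuration for the actual DataChannel socket.

```python
import json
import socket

# Path UV4L is assumed to expose for its bridged DataChannel
# (set via the uv4l-server WebRTC options); adjust to your setup.
UV4L_DATACHANNEL_SOCKET = '/tmp/uv4l.socket'

def send_annotations(inference_results):
    """Send one JSON message per inference result over the UV4L socket."""
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(UV4L_DATACHANNEL_SOCKET)
        for result in inference_results:
            message = {
                'label': result['label'],        # e.g. 'face'
                'score': result['score'],        # model confidence, 0..1
                'box': result['bounding_box'],   # (x, y, width, height)
            }
            sock.sendall(json.dumps(message).encode('utf-8'))

if __name__ == '__main__':
    # Hypothetical payload; in the post this data comes from the
    # Vision Kit's inference loop.
    send_annotations([{'label': 'face', 'score': 0.92,
                       'bounding_box': (120, 80, 60, 60)}])
```

On the browser side, the matching DataChannel's onmessage handler parses this JSON and draws the boxes on a canvas layered over the video element.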

To fully follow along you will need a Vision Kit and should have completed all the instructions in part 1. If you don’t have a Vision Kit, you may still get some value out of seeing how UV4L’s DataChannels make it easy to send data from a Raspberry Pi to your browser application. ...

Continue Reading


A couple of years ago I did a TADHack where I envisioned a cheap, low-powered camera that could run complex computer vision and stream remotely when needed. After considering what it would take to build something like this myself, I waited patiently for the technology to arrive. Today, with Google’s new AIY Vision Kit, we are pretty much there.

The AIY Vision Kit is a $45 add-on board that attaches to a Raspberry Pi Zero with a Raspberry Pi Camera v2. The board includes a Vision Processing Unit (VPU) chip that runs TensorFlow image processing graphs super efficiently. The kit comes with a bunch of examples out of the box, but to actually see what the camera sees you need to plug it into a monitor over HDMI. That’s not very useful when you want to put your battery-powered kit in a remote location. And while it is nice that the rig does not require any Internet connectivity, staying offline misses out on a lot of the fun applications. So, let’s add some WebRTC to the AIY Vision Kit and let it stream over the web. ...
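For a flavour of what the on-device side looks like, here is a minimal face-detection loop in the style of the Vision Kit's bundled Python samples (module names may differ between AIY library releases, so treat this as a sketch rather than the post's exact code):

```python
from picamera import PiCamera

from aiy.vision.inference import CameraInference
from aiy.vision.models import face_detection

# Run the face detection model on the VPU against the live camera feed.
with PiCamera(sensor_mode=4, resolution=(1640, 1232), framerate=30) as camera:
    with CameraInference(face_detection.model()) as inference:
        for result in inference.run():
            faces = face_detection.get_faces(result)
            print('Detected %d face(s)' % len(faces))
```

Out of the box you only see output like this in a terminal or on an attached monitor; streaming the video and the detections to a browser is what adding WebRTC gets you.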

Continue Reading


As WebRTC implementations and field trials evolve, field experience is telling us that there are still a number of open issues to resolve before this technology is deployable in the real world, and that we would probably do some things differently if we started all over again. As an example, see the recent W3C discussion What is missing for building (WebRTC) real services or the post by Quobis‘ CTO on WebRTC’s use of SDP.

Tim Panton, contextual communications consultant at Westhawk Ltd, has gone through some of these issues. Over the last couple of years we have had the chance to run some workshops together and have some good discussions in the IETF and W3C context. Tim’s expertise is very valuable, and I thought it would be a good idea to have him here to share some of his experiences with our readers. It ended up as a rant. ...

Continue Reading
