39 comments on “Part 2: Building an AIY Vision Kit Web Server with UV4L”

  1. Hi,

    I’m running into an error when trying to run the server.py script:

    (env) pi@raspberrypi:~/AIY-projects-python $ sudo ../../../../env/bin/python3 server.py

    Traceback (most recent call last):
    File “server.py”, line 15, in
    from aiy.vision.leds import Leds
    ImportError: No module named ‘aiy.vision.leds’

    It’s not able to find this module. Is there a step I’m missing somewhere?

    Thanks
    Rachel

    • Do you know what aiy image you loaded? It looks like you are on an older one. They removed the virtual environment and changed some of the library references, including the LEDs. You should download and reflash your SD card with the latest image that has some improvements over the previous ones: https://dl.google.com/dl/aiyprojects/aiyprojects-latest.img.xz

      My code is working with aiyprojects-2018-02-21.img.xz.
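      For the record, one common way to write that image to the SD card from a Linux machine is something like the command below (Etcher works too); /dev/sdX is a placeholder for your card’s device, so double-check it before running:

        xzcat aiyprojects-latest.img.xz | sudo dd of=/dev/sdX bs=4M status=progress conv=fsync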

      The old import under the virtual environment used to be:
      from aiy._drivers._rgbled import PrivacyLED

      That module just illuminates the privacy LED, so worst case you can comment it out if that is the only thing that is giving you trouble and you don’t care about the light.
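      In case it is useful, here is a minimal sketch of the new-style usage that server.py relies on, assuming the Leds API from the current AIY sample code (double-check the method names against the library version on your image):

        # Newer images (e.g. aiyprojects-2018-02-21): Leds comes from aiy.vision.leds
        from aiy.vision.leds import Leds

        leds = Leds()
        leds.update(Leds.privacy_on())   # light the privacy LED while streaming
        # ... run inference / streaming ...
        leds.update(Leds.privacy_off())  # turn it off when done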

  2. I’ve been unable to find any documentation on the face object and its methods and attributes (such as joy_score). Could you point me in the right direction?

    • The AIY team has not published any docs on this that I know of. I learned enough for my projects just by reviewing their code samples in the repo and looking for related comments in the repo’s issues section. Most of the samples have a good amount of comments in the code.
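      For reference, here is a rough sketch of how those samples use the face attributes; it is pieced together from the face_detection_camera / joy_detection demos rather than formal docs, so treat the attribute names as coming from that sample code:

        from picamera import PiCamera
        from aiy.vision.inference import CameraInference
        from aiy.vision.models import face_detection

        with PiCamera(resolution=(1640, 1232), framerate=15) as camera:
            with CameraInference(face_detection.model()) as inference:
                for result in inference.run():
                    for face in face_detection.get_faces(result):
                        x, y, width, height = face.bounding_box
                        print('face_score=%.2f joy_score=%.2f box=(%d, %d, %d, %d)'
                              % (face.face_score, face.joy_score, x, y, width, height))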

  3. Hi!
    Great project! This was a smooth ride so far!
    Any idea how I could go about adding a layer of security, like a login and password page before accessing the stream?
    I would like to set it up as a security camera.
    Also, I notice that only 1 device can access the stream with the setup in part 1. Do you have any pointers as to how to overcome that?

    • It’s good to hear you are having success with this. The UV4L server has an HTTP Basic authentication option – see the manual or just run through the config file to set this. I use this at home just by passing the user name and password as part of the URL. My first thought on improving this would be to set up a webpage that does some user authentication, then use Flask to handle that authentication and hide the UV4L credentials so they aren’t exposed to the browser (a rough sketch of that idea follows below).

      When you say only 1 device can access the stream, I assume you mean only 1 WebRTC stream at a time? I believe best practice there is to use a media server like Janus or Jitsi for that. I know Janus runs well on a Pi 3, but I am not sure if you would have enough CPU to run it on a Pi Zero while doing inference processing too.
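      If you want to put your own login in front of the page, a minimal Flask sketch along these lines could gate it with HTTP Basic auth (the username/password values and the route are placeholders, and this only protects the Flask pages, not the UV4L endpoints themselves):

        from functools import wraps
        from flask import Flask, Response, request

        app = Flask(__name__)
        USERNAME, PASSWORD = 'pi', 'change-me'   # placeholders

        def requires_auth(view):
            @wraps(view)
            def wrapped(*args, **kwargs):
                auth = request.authorization
                if not auth or auth.username != USERNAME or auth.password != PASSWORD:
                    return Response('Login required', 401,
                                    {'WWW-Authenticate': 'Basic realm="camera"'})
                return view(*args, **kwargs)
            return wrapped

        @app.route('/')
        @requires_auth
        def index():
            return 'stream page goes here'

      This only adds a login prompt in front of the viewer page; the UV4L stream itself would still need its own protection.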

      • Thank you for your response! I will try the username/password URL method.
        I wonder if we could just record the stream, do the inference processing on the Pi, and upload the result to the cloud as a video file. That way we could stream it directly from there. I don’t mind if my “real-time” has a little delay. What do you think?
        Unfortunately, although part I worked like a breeze, part II doesn’t work for me yet. Here’s my setup:
        I am using Windows 10 and PuTTY. I am also using the latest AIY Vision Kit (1.1 I think). The Joy Detector is disabled.

        The script runs and the inference processing seems to work well (based on the terminal output). However, when I use Chrome to connect to my local Pi (with the local IP) on port 5000, all I get is a blank page. The HTML is the same as described on this page.
        Do I need to stop the UV4L process? Why is the host 0.0.0.0 in your script? Do we need to put our local Pi IP address there instead?

        • If you want to set up a server in the cloud instead of running inference on-device, see this post: https://webrtchacks.com/webrtc-cv-tensorflow/

          I ran through the “Just let me try it” instructions above on the new 1.1 kit last week and did not have any issues.

          You are connecting from your Windows 10 machine to the Pi Zero. “http://raspberrypi.local:5000” should work in your browser unless you changed your Pi Zero host name, as long as you are on the same LAN.

          What do the Python console and the JavaScript console say when you try to connect?

          The 0.0.0.0 just tells Flask to listen on all of the Pi Zero’s network interfaces instead of only localhost, so other machines on your LAN can reach it.

          I can give mine a try with my Win 10 machine tomorrow.

          • I gave this a try again with a fresh install on my 1.1 Vision Kit and did not have any issues. I did notice the websocket connection seems to be way slower now for some reason, but the video feed and annotation worked fine otherwise. I’ll need to investigate the websocket performance issue.

  4. Hi Chad!
    Thanks for the update and the link to the tutorial. I will def give it a try too. :)
    I have the webRTC running from the first tutorial. Should I disable it like you suggest in your tweaking section?
    I will run some more tests tonight with part 2.
    Right now, I am using a cron job to start a tweaked version of the face_detection_camera.py script to take a picture and email it to me, but it’s CPU intensive and I am afraid I am just opening sockets without closing them… I looked over your code and it seems to take care of that. Am I right?

    • You don’t need to keep the raspicam service going. That’s not used in part 2. You can disable that or even remove the uv4l-raspicam-extras package. However, that should not cause a conflict unless you connect to the uv4l raspicam somehow – so you don’t have to remove it.

      I only open a single socket.

      One other suggestion – check out https://motion-project.github.io. You could set it up to snap a picture when motion is detected (with many parameters to choose from) and then run inference on that. That project is very CPU friendly; a rough sketch of the relevant config is below.
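      As a starting point, here is a hedged sketch of a motion.conf fragment – option names vary a bit between Motion versions, so verify them against your installed docs. It saves a picture on motion and hands the file path to a script of your own for inference:

        # /etc/motion/motion.conf (fragment) – illustrative values only
        width 640
        height 480
        framerate 5
        threshold 1500                      # changed pixels needed to count as motion
        output_pictures best                # save the best frame per motion event
        target_dir /home/pi/motion-captures
        on_picture_save /home/pi/run_inference.sh %f   # %f = path of the saved picture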

  5. Thanks Chad! Your code works perfectly fine! I was going about it all wrong – I was using uv4l on port 8080 to stream as stated in the previous tutorials. I re-read your comments and went back to your original code on the LAN. It worked like a charm until it got too CPU intensive (I assume) and crashed.
    Here’s an example of what was printing in a loop in the terminal.

    Message from syslogd@raspberrypi at May 11 16:31:40 …
    kernel:[ 780.303823] Internal error: : 11 [#1] ARM

    Message from syslogd@raspberrypi at May 11 16:31:40 …
    kernel:[ 780.304228] Process Thread 0x0xb390 (pid: 1059, stack limit = 0xc2190188)

    But then I re-read the section about Optimizations. You really thought about everything!

    I am very grateful for these blog posts. They got me to read about WebRTC and UV4L, and to do some pretty cool tests with the face recognition scripts and live streaming!

    Since my last post, I have a camera that records if there are 2 faces and takes a picture every minute if there’s only 1 face. I wanted to do a timelapse but that was too heavy for the Pi. The files are uploaded to Dropbox after being captured and are deleted locally. I was thinking of using the Dropbox API as a base to put my UI on. I got it running in minutes but it’s too slow. Google Drive might be the next winner, but I didn’t get so lucky with the Python 3 implementation.
    I will give the Motion project and the other webrtcHacks tutorial you suggested a try and run inference in the cloud or on some other machine at home. I wish I could use my old Xbox One lol.

    • You can use a Raspberry Pi 3B or 3B+; just connect the AIY bonnet and the Raspberry Pi using a standard camera adapter cable and cut the 3.3V line.
      I suggest making the cut on the Raspberry Pi side of the cable, since the cable is larger there and it’s easier not to screw up the cut.

      On the 3B+, be careful of the PoE header.

      This should give you a nice bump in the CPU area.

  6. Hey, this is a very cool project. I ran into some issues when running server.py – it seems like uv4l breaks after about one minute, and then the uv4l process’s CPU is over 70% and RAM over 50%. Any ideas?

    • Did you apply the config changes in “Tweaking UV4L”? If the CPU gets overrun the whole thing stops working. If you are on a bad network connection with a lot of packet loss, UV4L will consume more CPU since it will need to work harder to encode the WebRTC stream. If you are going to use this outside of a tightly controlled environment I would recommend using a 640×480 resolution.

      • Thanks! That’s right – I configured uv4l and applied the tweaks, and the WebRTC works well. But there are still some problems, such as the delay of the rectangle drawn on the web page.

        • I did not experience the rectangle drawing delay when I first released the post, but noticed the issue on the new AIY Kit image. I’ll need to look into that. Make sure to keep an eye on the repo for updates when I get around to it: https://github.com/webrtcHacks/aiy_vision_web_server

          Or better yet, submit a pull request if you figure it out.

          • I did some investigation on the annotation delay. It was a problem with UV4L’s socket-to-dataChannel handling and Luca at UV4L fixed it today. Please run the following to fix this:
            sudo apt-get install --reinstall uv4l uv4l-webrtc-armv6 uv4l-raspidisp uv4l-raspidisp-extras

            Add uv4l-raspicam and uv4l-raspicam-extras if you are using those. Things will still get messed up if you overrun the CPU, but the UV4L fix should bring the annotation updates back up to the video framerate.

  7. Thanks for the writeup. I’m trying to output the recognition feed and inference to ffmpeg. Do you think that’s possible?

    • You are looking to save the video with annotations overlaid on it while streaming with WebRTC? The challenge is avoiding simultaneous access to the raspicam, which is why I did the uv4l-raspidisp approach in the first place. In theory you should be able to write the uv4l-raspidisp feed to disk. I would try some of the comments here: https://raspberrypi.stackexchange.com/questions/43708/using-the-uv4l-driver-to-stream-and-record-to-a-file-simultaneously.

      Another approach would be to just do a camera.start_recording, and then use something like OpenCV to put the annotations over it later (a rough sketch is below).

      Either way, you will need to be sensitive to your CPU consumption. If you are streaming constantly, it might be easier to just use WebRTC to record remotely.
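      For the local-recording route, a minimal picamera sketch might look like the following; the file name, resolution and duration are placeholders, and overlaying the saved bounding boxes would be a separate post-processing step (e.g. with OpenCV or ffmpeg):

        import time
        from picamera import PiCamera

        # Record the raw camera feed to disk while the inference loop runs separately.
        with PiCamera(resolution=(1640, 1232), framerate=15) as camera:
            camera.start_recording('raw_feed.h264')
            try:
                time.sleep(60)            # stand-in for your inference loop
            finally:
                camera.stop_recording()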

          • It’s 3.3V+ on both sides. My Vision Bonnet is from the first batch before they were pulled from shelves temporarily, so it is possible the issue has been fixed since. It’s the version where you have to flip the 22-to-22 pin flex cable and connect the arrow meant for the Pi to the bonnet instead when connecting to a Pi Zero.

            Mine won’t work without clipping that line.

            I asked Google about this, however, and they did not seem to be aware of any change to fix the issue above, or to acknowledge that it was an issue. From the couple of emails I exchanged, though, I very much got the impression that at least the support guys had no clue about the hardware.

            Maybe I will buy another kit and see what is different.

          • I was able to get the AIY Vision Kit to work with a Pi 3 without adjusting any of the cables. Just make sure the cables are connected properly and everything works with the stock SD card image right out of the box.

            AIY Vision Kit on a Raspberry Pi 3

  8. pi@raspberrypi:~$ sudo service uv4l-raspidisp restart
    Failed to restart uv4l-raspidisp.service: Unit uv4l-raspidisp.service not found.

    • It sounds like uv4l-raspidisp might not be installed. Are you sure it was included in the sudo apt-get install command for the packages in the “Just Let me Try it” section above? You can see all the available services if you do a sudo service --status-all

      • pi@raspberrypi:~$ sudo apt-get install -y uv4l uv4l-raspicam uv4l-raspicam-extras uv4l-webrtc-armv6 uv4l-raspidisp uv4l-raspidisp-extras
        Reading package lists… Done
        Building dependency tree
        Reading state information… Done
        uv4l is already the newest version (1.9.16).
        uv4l-raspicam is already the newest version (1.9.60).
        uv4l-raspicam-extras is already the newest version (1.42).
        uv4l-raspidisp is already the newest version (1.6).
        uv4l-raspidisp-extras is already the newest version (1.7).
        uv4l-webrtc-armv6 is already the newest version (1.84).
        0 upgraded, 0 newly installed, 0 to remove and 113 not upgraded.

        Yes, I installed all the packages!

      • I found the problem!
        Your command was wrong:
        What you wrote was: sudo service uv4l-raspidisp restart
        The correct one should be: sudo service uv4l_raspidisp restart
        “_” is not “-”
        Please change the content of the article so other people aren’t misled like I was!

  9. http://10.197.229.44:5000/
    The page is blank when I access it through the LAN!

    There is video on the monitor!
    However, no face annotation appears even when a face is recognized.

    pi@raspberrypi:~$ ls
    AIY-projects-python AIY-voice-kit-python bin Documents drivers-raspi Music Public Templates
    aiy_vision_web_server assistant-sdk-python Desktop Downloads models Pictures python_games Videos

    10.197.229.6 – – [22/Jun/2018 02:36:43] “GET / HTTP/1.1” 200 –
    INFO:werkzeug:10.197.229.6 – – [22/Jun/2018 02:36:43] “GET / HTTP/1.1” 200 –
    10.197.229.6 – – [22/Jun/2018 02:36:44] “GET /static/drawAiyVision.js HTTP/1.1” 200 –
    INFO:werkzeug:10.197.229.6 – – [22/Jun/2018 02:36:44] “GET /static/drawAiyVision.js HTTP/1.1” 200 –
    10.197.229.6 – – [22/Jun/2018 02:36:45] “GET /static/uv4l.js HTTP/1.1” 200 –
    INFO:werkzeug:10.197.229.6 – – [22/Jun/2018 02:36:45] “GET /static/uv4l.js HTTP/1.1” 200 –

  10. Hey there, thanks a lot for sharing your knowledge! It’s been a great leap forward for my robotics project.
    I am not a programmer or software engineer, so I depend on good tutorials like the one you did! 🙂
    My understanding of the code only scratches the surface of what is going on here… so my question would be: is it possible to also send the numerical values for face.bounding_box in tandem with the camera stream?
    I would like to be able to access my robot’s camera from within Unity and drive my model with the position of face.bounding_box. I would use an HTTP GET request from within Unity to fetch the values, if possible. Getting the Motion JPEG stream already works great, but it would be great to also have those values available! 🙂

      • With the SampleUnityMjpegViewer I am able to stream the Pi camera to Unity http://192.168.178.60:8080/stream/video.mjpeg

        I am not able to import http://raspberrypi.local:5000 though!
        Also it seems that I am not able to import http://192.168.178.60:9080/stream/webrtc

        I would like to find a way to send face.bounding_box as numerical values, plus the camera stream to Unity!

        At the moment I am running a Blynk local server on my Pi, which works great, but I could never wrap my head around how to stream images… so this here was just what I was looking for!

        Let me know if I am on the right track, or if it is a dead end… 🙂

        • You will not be able to use the uv4l WebRTC with Unity unless you set up a WebRTC stack inside Unity and adapt it to use uv4l’s signaling.

          If you have already figured out how to send a video stream, then I would recommend setting up a simple REST API (just using HTTP GETs) to send that information from the Pi to Unity. A minimal sketch of that idea is below.
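          Something along these lines could work on the Pi side; the /faces route and the latest_faces structure are illustrative names (not part of the original server.py), and your inference loop would be responsible for updating the list:

            from flask import Flask, jsonify

            app = Flask(__name__)

            # Updated by the inference loop, e.g. [{'x': 120, 'y': 80, 'width': 64, 'height': 64}]
            latest_faces = []

            @app.route('/faces', methods=['GET'])
            def faces():
                return jsonify(latest_faces)

            if __name__ == '__main__':
                app.run(host='0.0.0.0', port=5000)   # 0.0.0.0 so other LAN devices (like Unity) can reach it

          Unity could then poll http://raspberrypi.local:5000/faces with a UnityWebRequest and parse the JSON into bounding-box values.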

          • Thanks a lot for the info, that will save me the time of trying to set this up… so I will use a simple mjpg-streamer to get my camera stream to Unity and send the data with my REST API!
            But thanks again for the help and for introducing uv4l WebRTC, it has some great features!
            All the other tutorials are also great btw, they are all on my to-do agenda!! 🙂

  11. One thing that would be nice is to get the http://raspberrypi.local:5000 output as a video.mjpeg – is this possible?
    Then I could just send the values from within the Python script to my REST server; I cannot import anything into Unity in a form other than Motion JPEG…
