17 comments on “Part 2: Building a AIY Vision Kit Web Server with UV4L”

  1. Hi,

    I’m running into an error when trying to run the server.py script:

    (env) pi@raspberrypi:~/AIY-projects-python $ sudo ../../../../env/bin/python3 server.py

    Traceback (most recent call last):
    File "server.py", line 15, in <module>
    from aiy.vision.leds import Leds
    ImportError: No module named 'aiy.vision.leds'

    It’s not able to find this module. Is there a step I’m missing somewhere?

    Thanks
    Rachel

    • Do you know what aiy image you loaded? It looks like you are on an older one. They removed the virtual environment and changed some of the library references, including the LEDs. You should download and reflash your SD card with the latest image that has some improvements over the previous ones: https://dl.google.com/dl/aiyprojects/aiyprojects-latest.img.xz

      My code is working with aiyprojects-2018-02-21.img.xz.

      The old import under the virtual environment used to be:
      from aiy._drivers._rgbled import PrivacyLED

      That module just illuminates the privacy LED, so worst case you can comment it out if that is the only thing that is giving you trouble and you don’t care about the light.

  2. I’ve been unable to find any documentation on the face object and its methods and attributes (such as joy_score). Could you point me in the right direction?

    • The AIY team has not published any docs on this that I know of. I learned enough for my projects just by reviewing their code samples in the repo and looking for related comments in the repo’s issues section. Most of the samples have a good amount of comments in the code.
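
      For reference, here is roughly how the face object shows up in the AIY code samples (attribute names as used by the face_detection model; treat this as a sketch and double-check it against the samples on your image):

      from picamera import PiCamera
      from aiy.vision.inference import CameraInference
      from aiy.vision.models import face_detection

      # Run the on-device face detection model and read each face's attributes
      with PiCamera(resolution=(1640, 1232), framerate=30) as camera:
          with CameraInference(face_detection.model()) as inference:
              for result in inference.run():
                  for face in face_detection.get_faces(result):
                      # bounding_box is an (x, y, width, height) tuple; both scores run 0.0-1.0
                      print(face.bounding_box, face.face_score, face.joy_score)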

  3. Hi!
    Great project! This was a smooth ride so far!
    Any idea how I could go about adding a layer of security, like a login and password page before accessing the stream?
    I would like to set it up as a security camera.
    Also, I notice that only 1 device can access the stream with the setup in part 1. Do you have any pointers as to how to overcome that?

    • It’s good to hear you are having success with this. The UV4L Server has an HTTP Basic authentication option – see the manual or just run through the config file to set this. I use this at home just by passing the username and password as part of the URL. My first thought on improving this would be to set up a webpage that does some user authentication, then use Flask to mediate that authentication and hide the UV4L credentials so they aren’t exposed to the browser (see the sketch below).

      When you say only 1 device can access the stream, I assume you mean only 1 WebRTC stream at a time? I believe best practice there is to use a media server like Janus or Jitsi for that. I know Janus runs well on a Pi 3, but I am not sure if you would have enough CPU to run it on a Pi Zero while doing inference processing too.
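
      To illustrate the Flask idea, here is a minimal sketch that puts HTTP Basic authentication in front of the page (the credentials are placeholders – keep real ones out of the source):

      from functools import wraps
      from flask import Flask, Response, request

      app = Flask(__name__)

      USERNAME, PASSWORD = 'user', 'changeme'  # placeholders only

      def requires_auth(view):
          @wraps(view)
          def wrapped(*args, **kwargs):
              auth = request.authorization
              if not auth or auth.username != USERNAME or auth.password != PASSWORD:
                  # A 401 with a WWW-Authenticate header makes the browser prompt for credentials
                  return Response('Login required', 401,
                                  {'WWW-Authenticate': 'Basic realm="camera"'})
              return view(*args, **kwargs)
          return wrapped

      @app.route('/')
      @requires_auth
      def index():
          return 'stream page goes here'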

      • Thank you for your response! I will try the username/password URL method.
        I wonder if we could just record the stream, do the inference processing on the Pi, and upload the result as a video file to the cloud. That way we could then stream it directly from there. I don’t mind if my real-time stream has a little delay. What do you think?
        Unfortunately, although part I was working like a breeze, part II doesn’t work for me yet. Here’s my setup.
        I am using Windows 10 and Putty. I am also using the latest AIY vision kit (1.1 I think). Joy detector is disabled.

        The script runs and the inference processing seems to work well (based on the terminal output). However, when I use Chrome to connect to my local Pi (with the local IP) on port 5000, all I get is a blank page. The HTML is the same as described on this page.
        Do I need to stop the UV4L process? Why is the host 0.0.0.0 in your script? Do we need to put our local Pi’s IP address there?

        • If you want to set up a server in the cloud instead of running inference on device, see this post: https://webrtchacks.com/webrtc-cv-tensorflow/

          I ran through the “Just let me try it” instructions above on the new 1.1 kit last week and did not have any issues.

          You are connecting from your Windows 10 machine to the Pi Zero. “http://raspberry.pi:5000” should work in your browser unless you changed your Pi Zero host name and as long as you are on the same LAN.

          What do the Python console and JavaScript console say when you try to connect?

          The 0.0.0.0 just tells Flask to listen on all of the Pi Zero’s network interfaces, so other machines on your LAN can reach it.
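
          Concretely, the run call at the bottom of the script looks something like this (port 5000, as used in this post):

          if __name__ == '__main__':
              # 0.0.0.0 = listen on every interface, so browsers on other machines can connect
              app.run(host='0.0.0.0', port=5000)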

          I can give mine a try with my Win 10 machine tomorrow.

          • I gave this a try again with a fresh install on my 1.1 Vision Kit and did not have any issues. I did notice the websocket connection seems to be way slower now for some reason, but the video feed and annotation worked fine otherwise. I’ll need to investigate the websocket performance issue.

  4. Hi Chad!
    Thanks for the update and the link to the tutorial. I will definitely give it a try too. :)
    I have WebRTC running from the first tutorial. Should I disable it like you suggest in your tweaking section?
    I will run some more tests tonight with part 2.
    Right now, I am using a cronjob to start a tweaked version of the face_detection_camera.py script to take a picture and email it to me, but it’s CPU intensive and I am afraid I am just opening sockets without closing them… I looked over your code and it seems to take care of that. Am I right?

    • You don’t need to keep the raspicam service going. That’s not used in part 2. You can disable that or even remove the uv4l-raspicam-extras package. However, that should not cause a conflict unless you connect to the uv4l raspicam somehow – so you don’t have to remove it.

      I only open a single socket.

      One other suggestion – check out the Motion project: https://motion-project.github.io. You could set this up to snap a picture when motion is detected (with many parameters to choose from) and then run inference on that. That project is very CPU friendly.
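
      For example, motion can hand every snapshot it saves to a script of your own via its on_picture_save hook – something along these lines in motion.conf (the script path is just a placeholder, and option names can vary a bit between motion versions):

      # excerpt from motion.conf
      # run your own inference script on each saved picture; %f expands to the image file path
      on_picture_save /home/pi/run_inference.sh %f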

  5. Thanks Chad! Your code works perfectly fine! I was going about it all wrong. I was using uv4l on port 8080 to stream, as stated in the previous tutorials. I re-read your comments and went back to your original code, on the LAN. It worked like a charm until it got too CPU intensive (I assume) and crashed.
    Here’s an example of what was printing in a loop in the terminal.

    Message from syslogd@raspberrypi at May 11 16:31:40 …
    kernel:[ 780.303823] Internal error: : 11 [#1] ARM

    Message from syslogd@raspberrypi at May 11 16:31:40 …
    kernel:[ 780.304228] Process Thread 0x0xb390 (pid: 1059, stack limit = 0xc2190188)

    But then I re-read the section about Optimizations. You really thought about everything!

    I am very grateful for these blog posts. They got me to read about WebRTC and UV4L, and to do some pretty cool tests with the face recognition scripts and live streaming!

    Since my last post, I have a camera that records if there are 2 faces and takes a picture every minute if there’s only 1 face. I wanted to do a timelapse, but that was too heavy for the Pi. The files are uploaded to Dropbox after being captured and are deleted locally. I was thinking of using the Dropbox API and using it as a base to put my UI on. I got it running in minutes, but it’s too slow. Google Drive might be the next winner, but I didn’t get so lucky with the Python 3 implementation.
    I will give the Motion project and the other webrtcHacks tutorial you suggested a try, and run inference in the cloud or on some other machine at home. I wish I could use my old Xbox One lol.

  6. Hey, this is a very cool project. I ran into some issues when running server.py – it seems like uv4l broke after about one minute. Then the uv4l process’s CPU was over 70% and RAM was over 50%. Any ideas?

    • Did you apply the config changes in “Tweaking UV4L”? If the CPU gets overrun the whole thing stops working. If you are on a bad network connection with a lot of packet loss, UV4L will consume more CPU since it will need to work harder to encode the WebRTC stream. If you are going to use this outside of a tightly controlled environment I would recommend using a 640×480 resolution.

      • Thanks! That’s right – I configured uv4l and tweaked it. The WebRTC works well, but there are still some problems, such as the delay of the rectangle drawn on the web page.

        • I did not experience the rectangle drawing delay when I first released the post, but noticed the issue on the new AIY Kit image. I’ll need to look into that. Make sure to keep an eye on the repo for updates when I get around to doing that: https://github.com/webrtcHacks/aiy_vision_web_server

          Or better yet, submit a pull request if you figure it out.
