21 comments on “AIY Vision Kit Part 1: TensorFlow Computer Vision on a Raspberry Pi Zero”

  1. Why?
    WebRTC two-way Audio/Video/Data Intercom & Recorder
    WARNING! Some browsers do not allow access to local media on insecure origins. Consider switching the UV4L Streaming Server to secure HTTPS instead.

    • QI – the secure origin restriction only applies to sending your camera/microphone/screen-share from your web browser. We are not sending anything from the browser in this use case, so that does not matter. UV4L does have an HTTPS option. For background on secure origins, see https://webrtchacks.com/chrome-secure-origin-https/.

      • No, that is not entirely true. The restriction also applies if, while loading your web page from a secure domain, you make Ajax requests to an insecure server (perhaps localhost or even the RPi). Chrome allows that nowadays, but Firefox complains (it can be switched into an acceptance mode) and Safari is a no-go.

        • Yes – if your web server is running HTTPS and you try to fetch assets over HTTP, you will run into mixed-content issues. The example in this post uses the UV4L web server on the Raspberry Pi, all over HTTP, so it is not an issue here.

    • I just tried it and it worked fine for me. I have done this a bunch now and have not had any connection issues. I would check your local network.
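
      For readers who do want the HTTPS route mentioned above: the UV4L streaming server has SSL options that can be set in its config file. A sketch assuming the Raspbian package layout and a self-signed certificate (file paths are illustrative):

```
# Generate a self-signed certificate first (browsers will show a warning):
#   openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
#     -keyout /etc/uv4l/selfsign.key -out /etc/uv4l/selfsign.crt
# Then in /etc/uv4l/uv4l-raspicam.conf:
server-option = --use-ssl=yes
server-option = --ssl-private-key-file=/etc/uv4l/selfsign.key
server-option = --ssl-certificate-file=/etc/uv4l/selfsign.crt
```

      Restart the service afterwards (sudo service uv4l_raspicam restart) for the change to take effect.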

  2. Hi Chad; I am using picamera.start_recording() instead of .start_preview() and can stream the output, except I am only getting flickering green and pink and no bounding boxes. Did you modify face_detection_camera.py in any way?

    I still have the general stream and face detection stdout text. Just no bounding boxes. 🙁


  3. Any idea if the “bonnet” would work with a Pi3?

    It’d be nice to have more CPU power around the AI co-processor. I see this as the main advantage of the NCS, as you can also run multiple NCS sticks on a single computer.

    It’d be great if the Pi4 had real USB3.

    In any event, I just ordered an AIY Vision kit from Target (of all places!) so I’ll soon be able to compare it to the NCS I’ve been using.

      • I could not get it to work with the Pi3B+, but it does work with a Pi3B. I’m mystified as to why, but the Pi3B+ doesn’t boot if the bonnet is plugged in (even if the flat flex cables are not installed).

        It appears the AIY kit can’t do models larger than 256×256 pixels, whereas the NCS stick does 300×300 pixel models (perhaps larger; I’ve not tried). I’m disappointed in the AIY Vision kit for this reason: I was hoping for a bit of a speed-up over the NCS, but it seems I can’t run the model that I use on the NCS.
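
        Given that apparent 256×256 input limit, one workaround is to downscale frames before inference. A minimal sketch with Pillow; the helper name and sizes are illustrative, not part of the AIY API:

```python
from PIL import Image

def downscale_for_bonnet(image, size=(256, 256)):
    """Resize a frame to the 256x256 input the Vision bonnet appears
    to be limited to (hypothetical helper, not an AIY API call)."""
    return image.resize(size, Image.BILINEAR)

# Example: a 640x480 camera frame becomes 256x256 before inference.
frame = Image.new('RGB', (640, 480))
print(downscale_for_bonnet(frame).size)  # (256, 256)
```

        Note that squashing a non-square frame this way changes its aspect ratio, which can hurt detection accuracy; cropping to a square first is the usual alternative.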

  4. Quick question if you don’t mind.

    When I run:
    python3 AIY-projects-python/src/examples/vision/face_detection_camera.py

    I don’t see a window with real-time annotation, which would be really nice to show the kids.

    All I have is a terminal that shows something similar to:
    Iteration #5: num_faces=2
    Iteration #6: num_faces=2
    Iteration #7: num_faces=1
    Iteration #8: num_faces=0
    Iteration #9: num_faces=0
    Iteration #10: num_faces=1
    Iteration #11: num_faces=1

    How do I get that preview with the real-time annotation?

    Thank you.

    • The annotations do not show in a terminal window – you need the Pi plugged into a monitor to see the preview overlay. Remote viewing of the annotations is one reason why I put this post series together.

  5. I have a question. You tested face_detection_camera.py by playing a video on your iPad and facing it towards the kit so that it could view and detect faces – is that correct?

    How can we pass a video file to the Python code for testing?

    • Yes – for ease of testing I pointed the Pi Camera at a video playing on a screen rather than a person.

      The AIY Vision Kit has a method to run inference on a single image. If you wanted to process a video file you could use something like OpenCV or a similar library to pass individual frames from a video through that method.
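
      As a sketch of that OpenCV approach (untested on the hardware; the video filename and helper names are illustrative, and OpenCV is assumed to be installed on the Pi):

```python
from PIL import Image

def bgr_to_pil(frame):
    # OpenCV decodes frames as BGR numpy arrays; reverse the channel
    # order and copy so PIL gets a contiguous RGB image.
    return Image.fromarray(frame[:, :, ::-1].copy())

def detect_faces_in_video(path):
    # Imports kept local so the pure helper above works without the AIY stack.
    import cv2
    from aiy.vision.inference import ImageInference
    from aiy.vision.models import face_detection

    cap = cv2.VideoCapture(path)
    with ImageInference(face_detection.model()) as inference:
        while True:
            ok, frame = cap.read()
            if not ok:
                break  # end of video
            faces = face_detection.get_faces(inference.run(bgr_to_pil(frame)))
            print('num_faces=%d' % len(faces))
    cap.release()

# On the kit itself, something like:
#   detect_faces_in_video('faces.mp4')  # hypothetical input file
```

      This trades the bonnet’s real-time camera pipeline for one-image-at-a-time inference, so expect it to run well below the live frame rate.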

  6. Hi, I had run your tutorial before and it worked exceptionally well. Fast forward to today: I’m trying to reload my AIY Vision kit and was working back through your tutorial. When running the command
    sudo apt-get install -y uv4l uv4l-raspicam uv4l-raspicam-extras uv4l-webrtc-armv6 uv4l-raspidisp uv4l-raspidisp-extras
    I get:
    Reading package lists… Done
    Building dependency tree
    Reading state information… Done
    E: Unable to locate package uv4l
    E: Unable to locate package uv4l-raspicam
    E: Unable to locate package uv4l-raspicam-extras
    E: Unable to locate package uv4l-webrtc-armv6
    E: Unable to locate package uv4l-raspidisp
    E: Unable to locate package uv4l-raspidisp-extras

    Is this no longer supported? I hope it is; I am very intrigued to continue from where I was before. Thanks in advance.

    • Just an added note: I just realized that attempting to get the PGP key returned “no valid OpenPGP data found”. Would this cause the failure?
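
      If the key fetch failed, apt may ignore the repository entirely, which would produce exactly these “Unable to locate package” errors. A sketch of re-adding the key, assuming the URL from the UV4L install docs is still current:

```shell
# Re-fetch the linux-projects signing key, then refresh the package index.
# If this URL is unreachable, the repository itself may be down, which
# would also explain the "no valid OpenPGP data found" message.
curl -fsSL http://www.linux-projects.org/listing/uv4l_repo/lpkey.asc | sudo apt-key add -
sudo apt-get update
```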
