21 comments on “Computer Vision on the Web with WebRTC and TensorFlow”

    • Please do share! My hack here certainly wouldn’t scale well into a service, but the situation should improve dramatically with a proper GPU setup.

      I have another set of posts coming soon applying a similar technique to an embedded device too.

  1. Pingback: Live Body-Context (063) – Another Idea

  2. Thanks for the very detailed write-up!

    Just to be complete, if you want to process an actual video stream (as opposed to capturing individual images and sending them using an XHR), you could make use of an RTCPeerConnection. On the server side you can make use of “aiortc”, a Python implementation of WebRTC. You can then grab whatever frames you want, apply image processing and even return the results as a video stream.
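    A minimal sketch of that approach (the handler wiring and the process() placeholder are assumptions, not the commenter’s actual code; signaling — the offer/answer exchange — is omitted, see the aiortc examples for that part):

    ```python
    # Sketch only: grab decoded video frames from an incoming WebRTC track
    # with aiortc and hand each one to an image-processing step.
    # process() is a hypothetical placeholder for object detection.
    import asyncio
    from aiortc import RTCPeerConnection

    pc = RTCPeerConnection()

    def process(img):
        # stand-in for running detection on a numpy BGR image
        return img

    @pc.on("track")
    def on_track(track):
        if track.kind == "video":
            asyncio.ensure_future(consume(track))

    async def consume(track):
        while True:
            frame = await track.recv()              # an av.VideoFrame
            img = frame.to_ndarray(format="bgr24")  # numpy array, OpenCV-style
            process(img)
    ```

    Because the connection stays open, results can be drawn onto the frames and sent back as a return video stream instead of JSON.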

  3. Thanks for this incredible work. Unfortunately I have a problem I have not been able to solve. I am using Windows 10, and when I run the service it throws the following error:

    TypeError: Object of type 'int32' is not JSON serializable

    Any idea what it could be?

    • You are seeing this in the Python console output?

      I’m not sure why you are getting an error – I ran mine on Win10 and OSX. Offhand, the only piece of the output JSON that is an integer is line 106 of object_detection_api.py: item.numObjects = obj_above_thresh. You could try converting that to a string with str() or something like it to see if that works (or just remove that line).

      If you still have trouble please open an issue in the github repo where others are more likely to see it: https://github.com/webrtcHacks/tfObjWebrtc/issues

      • Yes, it is in the console. The problem seems to come from File "D:\web\tfObjWebrtc\object_detection_api.py", line 126, in get_objects:
        outputJson = json.dumps([ob.__dict__ for ob in output])
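        For anyone else hitting this: json.dumps() doesn’t know about NumPy scalar types such as np.int32. Beyond the str() workaround above, one generic fix (a sketch, not part of the original repo) is a small JSONEncoder subclass that converts NumPy scalars to plain Python numbers:

        ```python
        # Sketch: make json.dumps() accept NumPy scalars such as np.int32.
        import json
        import numpy as np

        class NumpyEncoder(json.JSONEncoder):
            def default(self, obj):
                if isinstance(obj, np.integer):   # covers int32, int64, ...
                    return int(obj)
                if isinstance(obj, np.floating):  # covers float32, float64, ...
                    return float(obj)
                return super().default(obj)

        payload = {"numObjects": np.int32(3)}
        print(json.dumps(payload, cls=NumpyEncoder))  # {"numObjects": 3}
        ```

        Passing cls=NumpyEncoder in the json.dumps() call at line 126 would then handle any NumPy integer or float in the output objects, not just numObjects.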

  4. I want to use my own trained model for object detection, but when I try to run it I get this error: - - [02/Aug/2018 02:48:45] "POST /image HTTP/1.1" 500 -
    Traceback (most recent call last):
    File "C:\Python36\lib\site-packages\flask-1.0.2-py3.6.egg\flask\app.py", line 2309, in __call__
    return self.wsgi_app(environ, start_response)
    File "C:\Python36\lib\site-packages\flask-1.0.2-py3.6.egg\flask\app.py", line 2295, in wsgi_app
    response = self.handle_exception(e)
    File "C:\Python36\lib\site-packages\flask-1.0.2-py3.6.egg\flask\app.py", line 1741, in handle_exception
    reraise(exc_type, exc_value, tb)
    File "C:\Python36\lib\site-packages\flask-1.0.2-py3.6.egg\flask\_compat.py", line 35, in reraise
    raise value
    File "C:\Python36\lib\site-packages\flask-1.0.2-py3.6.egg\flask\app.py", line 2292, in wsgi_app
    response = self.full_dispatch_request()
    File "C:\Python36\lib\site-packages\flask-1.0.2-py3.6.egg\flask\app.py", line 1816, in full_dispatch_request
    return self.finalize_request(rv)
    File "C:\Python36\lib\site-packages\flask-1.0.2-py3.6.egg\flask\app.py", line 1831, in finalize_request
    response = self.make_response(rv)
    File "C:\Python36\lib\site-packages\flask-1.0.2-py3.6.egg\flask\app.py", line 1982, in make_response
    reraise(TypeError, new_error, sys.exc_info()[2])
    File "C:\Python36\lib\site-packages\flask-1.0.2-py3.6.egg\flask\_compat.py", line 34, in reraise
    raise value.with_traceback(tb)
    File "C:\Python36\lib\site-packages\flask-1.0.2-py3.6.egg\flask\app.py", line 1974, in make_response
    rv = self.response_class.force_type(rv, request.environ)
    File "C:\Python36\lib\site-packages\werkzeug\wrappers.py", line 921, in force_type
    response = BaseResponse(*_run_wsgi_app(response, environ))
    File "C:\Python36\lib\site-packages\werkzeug\test.py", line 923, in run_wsgi_app
    app_rv = app(environ, start_response)
    TypeError: 'InvalidArgumentError' object is not callable
    The view function did not return a valid response. The return type must be a string, tuple, Response instance, or WSGI callable, but it was a InvalidArgumentError.

    I’m trying to use the faster_rcnn_inception_v2_pets.config model. I replaced the paths for the frozen inference graph and labelmap.pbtxt, but this error continues. Can you help me? (My model only has one class.)

  5. Please help me:

    app_rv = app(environ, start_response)
    TypeError: 'JpegImageFile' object is not callable
    The view function did not return a valid response. The return type must be a string, tuple, Response instance, or WSGI callable, but it was a JpegImageFile.

  6. Pingback: Receive webRTC video stream using python opencv in real-time – PythonCharm

  7. Hi, thanks for this detailed post. Very informative. I’ve developed a similar solution using aiortc – A WebRTC implementation in Python using asyncio. The result is a low latency real time object detection inference solution. The git repository can be found here: https://github.com/omarabid59/YOLO_Google-Cloud . Let me know if this is of interest to anyone and I can write a detailed post on it!

  8. Hi Omar. That sounds interesting and I have been meaning to give aiortc a try. Can you send an email to my chadwhart Gmail so we can discuss the details?

  9. Thank you so much for this tutorial. Could you add a new article showing how to use WebSockets instead of sending the image with a POST request?

    Or do you have any suggestions on how to improve the speed of this?
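    A rough sketch of the WebSocket idea, using the third-party "websockets" package (the package choice, port, and handler names are assumptions, and detect() is a hypothetical stand-in for the existing object-detection call) — a persistent connection avoids the per-frame HTTP request overhead:

    ```python
    # Sketch only: accept JPEG frames over a WebSocket instead of HTTP POST.
    # detect() is a placeholder for the existing detection pipeline.
    import asyncio
    import websockets  # pip install websockets

    def detect(jpeg_bytes):
        return '{"objects": []}'  # placeholder JSON result

    async def handle(ws):
        async for message in ws:          # one binary message per JPEG frame
            await ws.send(detect(message))

    async def main():
        async with websockets.serve(handle, "0.0.0.0", 5005):
            await asyncio.Future()        # run until cancelled

    # start with: asyncio.run(main())
    ```

    The browser side would then push canvas-captured JPEG blobs over the same socket instead of issuing an XHR per frame.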
