[D] Inference on video files using TensorRT Inference Server or TensorFlow Serving
I want to use a serving technology like TensorRT Inference Server or TensorFlow Serving to build a microservice architecture for analyzing video content with deep learning models (CNNs).
I'm unsure about the following points:
- What is the best way to store video files?
- What is the best way to extract frames and pass them to TensorFlow Serving or TensorRT Inference Server? (A sketch of what I have in mind follows this list.)
- Do I need a message broker like Apache Kafka, or can I use HTTP or gRPC directly to pass extracted frames to each microservice?
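
To make the second and third points concrete, here is a minimal sketch of the direct (brokerless) pattern I have in mind: OpenCV extracts frames from a video file and each sampled frame is sent straight to TensorFlow Serving over gRPC. The endpoint address, the model name `frame_classifier`, the input tensor name `input`, and the 224x224 input size are all placeholders, not values from a real deployment.

```python
import cv2
import grpc
import numpy as np
import tensorflow as tf
from tensorflow_serving.apis import predict_pb2, prediction_service_pb2_grpc

# Connect to a TensorFlow Serving gRPC endpoint (address is a placeholder).
channel = grpc.insecure_channel("localhost:8500")
stub = prediction_service_pb2_grpc.PredictionServiceStub(channel)

cap = cv2.VideoCapture("video.mp4")  # hypothetical local video file
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Sample roughly one frame per second of 30 fps video to limit request volume.
    if frame_idx % 30 == 0:
        # Resize, convert OpenCV's BGR to RGB, and scale to [0, 1] floats.
        rgb = cv2.cvtColor(cv2.resize(frame, (224, 224)), cv2.COLOR_BGR2RGB)
        batch = np.expand_dims(rgb, 0).astype(np.float32) / 255.0

        request = predict_pb2.PredictRequest()
        request.model_spec.name = "frame_classifier"          # assumed model name
        request.model_spec.signature_name = "serving_default"
        request.inputs["input"].CopyFrom(                     # assumed input tensor name
            tf.make_tensor_proto(batch))

        response = stub.Predict(request, timeout=10.0)
        print(frame_idx, list(response.outputs.keys()))
    frame_idx += 1
cap.release()
```

My current understanding is that this synchronous pattern would be enough for a prototype, and that a broker like Kafka only becomes necessary once frame extraction and inference have to be decoupled or scaled independently; that is exactly the kind of thing I would like confirmed.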
I have searched a lot but never found a similar architecture or any guidelines for managing inference on video files. Any advice would be appreciated; thanks in advance.