Figure 1: The MediaPipe face detection example running in the Visualizer
MediaPipe Visualizer (see Figure 2) is hosted at https://viz.mediapipe.dev. MediaPipe graphs can be inspected by pasting the graph code into the Editor tab or by uploading a graph file into the Visualizer. Users can pan and zoom the graphical representation of the graph with the mouse and scroll wheel, and the visualization reacts in real time to changes made in the editor.
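For reference, a MediaPipe graph is written in protobuf text format. Below is a minimal pass-through graph, adapted from MediaPipe's hello-world example; pasting a config like this into the Editor tab yields the corresponding diagram. (This snippet is illustrative and is not itself one of the hosted demos.)

```
# A minimal graph: one input stream flows through a single
# calculator to one output stream. The Visualizer renders the
# node and the streams that connect it to the graph's boundary.
input_stream: "in"
output_stream: "out"
node {
  calculator: "PassThroughCalculator"
  input_stream: "in"
  output_stream: "out"
}
```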
Figure 2: The MediaPipe Visualizer, hosted at https://viz.mediapipe.dev
Demos on MediaPipe Visualizer
We have created several sample Visualizer demos from existing MediaPipe graph examples. These can be seen within the Visualizer by visiting the following addresses in your Chrome browser:
Each of these demos can be run in the browser by clicking the little running man icon at the top of the editor (the icon is greyed out if a non-demo workspace is loaded). This opens a new tab that runs the current graph (a webcam is required).
Finally, we packaged all the requisite demo assets (ML models and auxiliary text/data files) as individual binary data packages that are loaded at runtime. For graphics and rendering, we let MediaPipe tap directly into WebGL so that most OpenGL-based calculators can “just work” on the web.
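To see why these assets matter, note that a graph references its model files by path, so the files must be present at runtime. The node below is a sketch along the lines of MediaPipe's public face detection graphs; the calculator name, option fields, and model path are taken from the open-source repo of that era and should be treated as assumptions here.

```
# Hypothetical inference node: loads a packaged TFLite model by path.
node {
  calculator: "TfLiteInferenceCalculator"
  input_stream: "TENSORS:input_tensors"
  output_stream: "TENSORS:output_tensors"
  options: {
    [mediapipe.TfLiteInferenceCalculatorOptions.ext] {
      # This model file is one of the binary assets packaged for the web.
      model_path: "mediapipe/models/face_detection_front.tflite"
    }
  }
}
```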
Currently, support for web-based MediaPipe has some important limitations:
- Only calculators in the demo graphs above may be used
- The user must edit one of the template graphs; they cannot provide their own from scratch
- The user cannot add or alter assets
- The executor for the graph must be single-threaded (i.e., ApplicationThreadExecutor); see the sketch after this list
- TensorFlow Lite inference on GPU is not supported
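For context on the executor restriction, a graph config can name its default executor explicitly. A minimal sketch of pinning a graph to the single-threaded ApplicationThreadExecutor might look like the following; the exact field usage is an assumption based on MediaPipe's ExecutorConfig proto.

```
# Sketch only: an empty name declares the graph's default executor.
executor {
  name: ""                           # "" designates the default executor
  type: "ApplicationThreadExecutor"  # runs calculators on the application thread
}
input_stream: "in"
output_stream: "out"
node {
  calculator: "PassThroughCalculator"
  input_stream: "in"
  output_stream: "out"
}
```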
We plan to continue building on this new platform to give developers much more control, removing many if not all of these limitations (e.g., by allowing for dynamic management of assets). Please follow the MediaPipe tag on the Google Developers blog and the Google Developers Twitter account (@googledevs).
We would like to thank Marat Dukhan, Chuo-Ling Chang, Jianing Wei, Ming Guang Yong, and Matthias Grundmann for contributing to this blog post.
Source: MediaPipe on the Web