* _Gphoto2_ takes pictures (not vm) into a `pictures/{1,2,3}` dir.
* These get split by *split_and_merge_output.py* into a `segments` subdir, so that smaller faces can be detected.
* Our modified _opencv-webcam-demo_ analyses these images and outputs JSON.
* The same *split_and_merge_output.py* process detects the segment JSONs and merges them back into the `pictures/{1,2,3}` dir. It subsequently deletes the segmented photos.
[TODO: remove duplicate faces on merge]
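The splitting step above can be sketched as follows. The real tile size and overlap used by *split_and_merge_output.py* are not stated here, so the values below (and the function name `tile_boxes`) are hypothetical; the idea is to cover each frame with overlapping crops so faces on tile borders are not cut in half:

```python
def tile_boxes(width, height, tile=640, overlap=64):
    """Compute (left, top, right, bottom) crop boxes covering the whole
    frame with some overlap, so faces on tile borders are not lost.

    Tile size and overlap are illustrative assumptions, not the values
    the actual script uses."""
    step = tile - overlap
    boxes = []
    for top in range(0, max(height - overlap, 1), step):
        for left in range(0, max(width - overlap, 1), step):
            boxes.append((left, top,
                          min(left + tile, width),
                          min(top + tile, height)))
    return boxes
```

Each box would then be cropped out of the frame and written to the `segments` subdir for _opencv-webcam-demo_ to analyse.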
Now we have our dataset of photos with JSONs.
To output:
* `python parse_output.py -o ~/pictures/{1,2,3} --json --window-size 10 > ~/sdk-samples/output/talk1.json` creates a JSON file suitable for web-browser graphing
* `python parse_output.py -o ~/pictures/{1,2,3} --wav ~/pictures/{1,2,3}.wav --targetDir /home/crowd/sdk-samples/output/1 --window-size 10` assumes a WAVE file that starts simultaneously with frame 1, and splits it into the windows
* Start server for graph.php & image.php `cd ~/sdk-samples && php -S 0.0.0.0:8000`
* Navigate to: `http://192.168.178.33:8000/graphs.php` to view images
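The `--wav` windowing step can be sketched with the standard-library `wave` module. How *parse_output.py* actually does this is not shown here, so `split_wav` and its in-memory byte interface are assumptions; the point is simply cutting the audio into consecutive `--window-size` slices aligned with frame 1:

```python
import io
import wave

def split_wav(wav_bytes, window_s=10):
    """Split a WAV (given as bytes) into consecutive windows of
    window_s seconds, assuming the audio starts in sync with frame 1.
    A hypothetical sketch, not the actual parse_output.py code."""
    chunks = []
    with wave.open(io.BytesIO(wav_bytes), "rb") as src:
        params = src.getparams()
        frames_per_window = params.framerate * window_s
        while True:
            frames = src.readframes(frames_per_window)
            if not frames:
                break
            buf = io.BytesIO()
            with wave.open(buf, "wb") as dst:
                dst.setparams(params)
                dst.writeframes(frames)
            chunks.append(buf.getvalue())
    return chunks
```

Each chunk is itself a complete WAV file, so it can be written straight into the `--targetDir` alongside the matching window's JSON.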
Using *split_and_merge_output.py*, frames are split into segments so that SMALL_FACES actually detects _really_ small faces. This requires enabling `--segments` on _opencv-webcam-demo_ (so it detects/writes `segment%06d` files).
*split_and_merge_output.py* also re-merges these `segment%06d.json` files into a single `frame%06d.json` file.
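The re-merge step (and the duplicate-face TODO above) could look roughly like this. The actual JSON schema emitted by _opencv-webcam-demo_ is not reproduced here, so the `x`/`y`/`width`/`height` keys and the function name are assumptions; the sketch shifts each segment's face boxes back into frame coordinates and drops faces whose centre already falls inside a merged box from an overlapping segment:

```python
def merge_segment_faces(segments):
    """Merge per-segment face detections back into frame coordinates.

    `segments` is a list of (offset_x, offset_y, faces) tuples, where
    `faces` is the parsed JSON list for that segment. The x/y/width/height
    schema is an assumption about the opencv-webcam-demo output."""
    merged = []
    for ox, oy, faces in segments:
        for f in faces:
            # shift the segment-local box to frame coordinates
            g = dict(f, x=f["x"] + ox, y=f["y"] + oy)
            # naive duplicate suppression: skip a face whose centre lies
            # inside an already-merged box from an overlapping segment
            cx = g["x"] + g["width"] / 2
            cy = g["y"] + g["height"] / 2
            if any(m["x"] <= cx <= m["x"] + m["width"] and
                   m["y"] <= cy <= m["y"] + m["height"] for m in merged):
                continue
            merged.append(g)
    return merged
```

A centre-in-box test is the crudest possible dedup; an IoU threshold would be more robust if the same face is detected at slightly different scales in two segments.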