The crowd will tell us
Pipeline
- Gphoto2 takes pictures (not vm) into a 'pictures/{1,2,3}' dir.
- These get split by split_and_merge_output.py into a 'segments' subdir (to be able to detect smaller faces).
- Our modified opencv-webcam-demo will analyse these images and output JSON.
- The same split_and_merge_output.py process will detect the segment JSONs and merge them back into the 'pictures/{1,2,3}' dir. It subsequently deletes the segmented photos.
[TODO: remove duplicate faces on merge]
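The splitting step can be pictured as cutting each frame into overlapping tiles, so each face fills a larger share of what the detector sees. A minimal sketch of that idea — the tile grid and overlap fraction here are illustrative assumptions, not split_and_merge_output.py's actual parameters:

```python
# Sketch: compute overlapping crop boxes for one frame. The overlap keeps
# faces that straddle a tile boundary fully visible in at least one tile.

def segment_boxes(width, height, cols=3, rows=2, overlap=0.1):
    """Return (left, top, right, bottom) crop boxes covering the frame."""
    tile_w, tile_h = width / cols, height / rows
    pad_w, pad_h = tile_w * overlap, tile_h * overlap
    boxes = []
    for r in range(rows):
        for c in range(cols):
            left = max(0, int(c * tile_w - pad_w))
            top = max(0, int(r * tile_h - pad_h))
            right = min(width, int((c + 1) * tile_w + pad_w))
            bottom = min(height, int((r + 1) * tile_h + pad_h))
            boxes.append((left, top, right, bottom))
    return boxes

boxes = segment_boxes(6000, 4000)  # e.g. a full-resolution still
```

The overlap is also why duplicate faces can appear on merge (the TODO above): a face near a tile boundary may be detected in two tiles.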
Now we have our dataset of photos with JSONs.
To output:

parse_output.py -o ~/pictures/{1,2,3} --json --window-size 10 > ~/sdk-samples/output/talk1.json

creates a JSON file suitable for web-browser graphing, and

python parse_output.py -o ~/pictures/{1,2,3} --wav ~/pictures/{1,2,3}.wav --targetDir /home/crowd/sdk-samples/output/1 --window-size 10

assumes a WAVE file that starts simultaneously with frame 1 and splits it into the windows.
- Start the server for graphs.php & images.php:
cd ~/sdk-samples && php -S 0.0.0.0:8000
- Navigate to http://192.168.178.33:8000/graphs.php to view the images.
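The --window-size flag above implies aggregating per-frame scores into fixed-size windows before graphing. A minimal sketch of that aggregation, assuming a plain list of per-frame scores (the real parse_output.py schema may differ):

```python
# Sketch: average per-frame scores in consecutive windows of N frames,
# so a --window-size 10 graph plots one point per 10 frames.

def window_average(scores, window_size):
    """Average a list of per-frame scores in chunks of window_size."""
    return [
        sum(scores[i:i + window_size]) / len(scores[i:i + window_size])
        for i in range(0, len(scores), window_size)
    ]
```

A trailing partial window is averaged over however many frames it actually contains.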
Done :-)
Commands
Start a few processes:
gphoto2, to capture an image every 2 seconds:
gphoto2 --port usb: --capture-image-and-download -I 2 --filename=/home/crowd/output/frame%06n.jpg
Set the FPS in parse_output.py to match this capture interval.
If using a Sony A7s/r: Ubuntu 14.04's gphoto2 is too old, so use the VM host with gphoto2. Mount the pictures dir into the VirtualBox guest:
sudo mount -t vboxsf pictures ~/pictures -o rw,uid=1000,gid=1000
The modified 'webcam demo' to analyse and generate json:
/home/crowd/build/opencv-webcam-demo/opencv-webcam-demo --data /home/crowd/affdex-sdk/data --faceMode 1 --numFaces 80 -o /home/crowd/output/segments/ --draw 0 --segments 1
When using --segments 1, also run:
python split_and_merge_output.py --frameOutput ~/output --segmentDir ~/output/segments/
Using split_and_merge_output.py, frames are split into segments so SMALL_FACES mode actually detects really small faces. This requires enabling --segments on opencv-webcam-demo (so it detects and writes segment%06d files). split_and_merge_output.py also re-merges the segment%06d.json files into a frame%06d.json file.
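The re-merge step has to translate faces from segment-local coordinates back into frame coordinates before writing the frame-level JSON. A sketch of that translation, assuming a simple `{"faces": [{"x": …, "y": …}]}` layout (the real segment JSON schema may differ):

```python
# Sketch: merge per-segment detections into one frame-level JSON,
# offsetting each face by its segment's origin in the full frame.

import json

def merge_segment_faces(segments):
    """segments: list of (origin_x, origin_y, segment_json_string).
    Returns one frame-level JSON string with offset face coordinates."""
    merged = []
    for ox, oy, text in segments:
        for face in json.loads(text).get("faces", []):
            face = dict(face, x=face["x"] + ox, y=face["y"] + oy)
            merged.append(face)
    return json.dumps({"faces": merged})
```

Because segments overlap, a real merge would also have to drop duplicate detections of the same face — the TODO noted in the pipeline above.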
When done, generate output:
parse_output.py -o ~/output1 --json --window-size 1 > ~/sdk-samples/output/talk1.json
parse_output.py -o ~/output2 --json --window-size 1 > ~/sdk-samples/output/talk2.json
parse_output.py -o ~/output3 --json --window-size 1 > ~/sdk-samples/output/talk3.json
and for audio, which expects the WAV to start simultaneously with frame00001:
python parse_output.py -o ~/pictures/1 --wav ~/pictures/1.wav --targetDir /home/crowd/sdk-samples/output/1 --window-size 1
python parse_output.py -o ~/pictures/2 --wav ~/pictures/2.wav --targetDir /home/crowd/sdk-samples/output/2 --window-size 1
python parse_output.py -o ~/pictures/3 --wav ~/pictures/3.wav --targetDir /home/crowd/sdk-samples/output/3 --window-size 1
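Splitting a WAV into per-window chunks can be done with Python's stdlib wave module; a sketch of the idea, assuming one window spans window-size seconds of audio (how parse_output.py actually maps windows to time depends on the FPS set above):

```python
# Sketch: cut a WAV into consecutive fixed-length chunks, written as
# out_prefix0.wav, out_prefix1.wav, ... Returns the number of chunks.

import wave

def split_wav(path, out_prefix, window_seconds):
    with wave.open(path, "rb") as src:
        params = src.getparams()
        frames_per_window = int(params.framerate * window_seconds)
        index = 0
        while True:
            chunk = src.readframes(frames_per_window)
            if not chunk:
                break
            with wave.open(f"{out_prefix}{index}.wav", "wb") as dst:
                dst.setparams(params)
                dst.writeframes(chunk)
            index += 1
    return index
```

The wave module patches the frame count in each output header on close, so a short final chunk is still a valid WAV.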
Then start server:
cd ~/sdk-samples && php -S 0.0.0.0:8000
Navigate to http://192.168.178.33:8000/graphs.php to view the images.