moodmeter/README.md

Start two processes:

gphoto2, to capture an image every 2 seconds:

gphoto2 --port usb: --capture-image-and-download -I 2 --filename=/home/crowd/output/frame%06n.jpg
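
To sanity-check that frames are actually arriving at the expected rate, something like the following can be left running (a hypothetical helper, not part of this repo; the directory matches the --filename pattern above):

```python
# Poll the capture directory and report the interval between the two newest frames.
import glob
import os
import time

OUTPUT_DIR = "/home/crowd/output"

while True:
    frames = sorted(glob.glob(os.path.join(OUTPUT_DIR, "frame*.jpg")))
    if len(frames) >= 2:
        dt = os.path.getmtime(frames[-1]) - os.path.getmtime(frames[-2])
        print("%d frames captured, last interval %.1f s" % (len(frames), dt))
    time.sleep(5)
```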

Set the FPS in parse_output.py to match the capture interval.
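
With the 2-second capture interval above this works out to 0.5 frames per second; a minimal illustration of the relationship (the actual variable name inside parse_output.py may differ):

```python
CAPTURE_INTERVAL_SEC = 2            # matches gphoto2 -I 2 above
FPS = 1.0 / CAPTURE_INTERVAL_SEC    # 0.5 frames per second
```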

If using a Sony A7s/A7r, the gphoto2 version shipped with Ubuntu 14.04 is too old, so run gphoto2 on the VM host instead and mount the pictures directory into the VirtualBox guest:

sudo mount -t vboxsf pictures ~/pictures -o rw,uid=1000,gid=1000

Run the modified 'webcam demo' to analyse the frames and generate JSON:

/home/crowd/build/opencv-webcam-demo/opencv-webcam-demo --data /home/crowd/affdex-sdk/data --faceMode 1 --numFaces 80 -o /home/crowd/output/segments/ --draw 0 --segments 1

When using --segments 1, also run:

python split_and_merge_output.py --frameOutput ~/output --segmentDir ~/output/segments/

split_and_merge_output.py splits each frame into segments so that SMALL_FACES actually detects really small faces. This requires enabling --segments on opencv-webcam-demo (so it detects and writes segment%06d files); split_and_merge_output.py then re-merges the resulting segment%06d.json files into a single frame%06d.json file.
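
For reference, the idea behind the split/merge step is roughly the following (an illustrative sketch only; the tile size and JSON layout are assumptions, not the real split_and_merge_output.py):

```python
# Sketch of the split/merge idea: tile each frame so faces become large
# relative to the detector's minimum size, then shift detections back.
import json
from PIL import Image

TILE = 640  # assumed tile size in pixels

def split_frame(frame_path, out_prefix):
    """Cut one frame into TILE x TILE segments."""
    img = Image.open(frame_path)
    w, h = img.size
    offsets = []
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            tile = img.crop((x, y, min(x + TILE, w), min(y + TILE, h)))
            tile.save("%s_%d_%d.jpg" % (out_prefix, x, y))
            offsets.append((x, y))
    return offsets

def merge_segments(segment_jsons, frame_json):
    """Shift each segment's face boxes back into frame coordinates and
    write a single per-frame JSON. segment_jsons: [(path, (x_off, y_off)), ...]"""
    faces = []
    for path, (ox, oy) in segment_jsons:
        for face in json.load(open(path)).get("faces", []):
            face["x"] += ox
            face["y"] += oy
            faces.append(face)
    json.dump({"faces": faces}, open(frame_json, "w"))
```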

When done, generate output:

parse_output.py -o ~/output1 --json --window-size 1 > ~/sdk-samples/output/talk1.json
parse_output.py -o ~/output2 --json --window-size 1 > ~/sdk-samples/output/talk2.json
parse_output.py -o ~/output3 --json --window-size 1 > ~/sdk-samples/output/talk3.json
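
--window-size 1 averages the per-frame values over one-second windows; a sketch of the idea, assuming the window is given in seconds (parse_output.py's real aggregation may differ):

```python
# Average per-frame values over fixed-length time windows.
def window_average(values, fps, window_size_sec=1):
    frames_per_window = max(1, int(round(fps * window_size_sec)))
    return [sum(values[i:i + frames_per_window]) / float(len(values[i:i + frames_per_window]))
            for i in range(0, len(values), frames_per_window)]

# window_average([0.1, 0.3, 0.2, 0.4], fps=0.5, window_size_sec=4) ~ [0.2, 0.3]
```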

and the audio, which expects the WAV to start simultaneously with frame00001:

python parse_output.py -o ~/pictures/1 --wav ~/pictures/1.wav --targetDir /home/crowd/sdk-samples/output/1 --window-size 1
python parse_output.py -o ~/pictures/2 --wav ~/pictures/2.wav --targetDir /home/crowd/sdk-samples/output/2 --window-size 1
python parse_output.py -o ~/pictures/3 --wav ~/pictures/3.wav --targetDir /home/crowd/sdk-samples/output/3 --window-size 1
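
Since the WAV is assumed to start at the same instant as the first frame, frame k corresponds to audio position (k - 1) / FPS seconds; a small illustrative mapping (not part of parse_output.py):

```python
# Map a frame index to its position in the WAV, assuming the recording
# starts exactly at frame 1 (illustrative helper).
FPS = 0.5  # from the 2-second capture interval

def frame_to_audio_seconds(frame_index, fps=FPS):
    return (frame_index - 1) / float(fps)

# frame 1 -> 0.0 s, frame 31 -> 60.0 s at 0.5 fps
```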

Then start the server:

cd ~/sdk-samples && php -S 0.0.0.0:8000

Navigate to http://192.168.178.33:8000/graphs.php to view the images.