Read subsequently numbered files

parent 6e6415433f
commit 4f58f1a3cc

6 changed files with 405 additions and 243 deletions
README.md (118 lines changed)

@@ -1,117 +1,9 @@
-# Sample Apps for Affdex SDK for Windows and Linux
-
-Welcome to our repository on GitHub! Here you will find example code to get you started with our Affdex Linux SDK 3.2 and Affdex Windows SDK 3.4, and begin emotion-enabling your own app! Documentation for the SDKs is available on <a href=http://developer.affectiva.com/>Affectiva's Developer Portal</a>.
-
-*Build Status*
-- Windows: [![Build status](https://ci.appveyor.com/api/projects/status/pn2y9h8a3nnkiw41?svg=true)](https://ci.appveyor.com/project/ahamino/win-sdk-samples)
-- Ubuntu: [![Build Status](https://travis-ci.org/Affectiva/cpp-sdk-samples.svg?branch=master)](https://travis-ci.org/Affectiva/cpp-sdk-samples)
-
-Dependencies
-------------
-
-*Windows*
-- Affdex SDK 3.4 (64 bit)
-- Visual Studio 2013 or higher
-
-*Linux*
-- Ubuntu 14.04 or CentOS 7
-- Affdex SDK 3.2
-- CMake 2.8 or higher
-- GCC 4.8
-
-*Additional dependencies*
-- OpenCV 2.4
-- Boost 1.55
-- libuuid
-- libcurl
-- libopenssl
-
-Installation
-------------
-
-*Windows*
-- Download Affdex SDK [from here](https://knowledge.affectiva.com/docs/getting-started-with-the-emotion-sdk-for-windows)
-- Install the SDK using the MSI installer.
-- The additional dependencies get installed automatically by NuGet.
-
-*Ubuntu*
-- Download Affdex SDK [from here](https://knowledge.affectiva.com/docs/getting-started-with-the-affectiva-sdk-for-linux)
-
-```bashrc
-sudo apt-get install build-essential libopencv-dev libboost1.55-all-dev libcurl4-openssl uuid-dev cmake
-wget https://download.affectiva.com/linux/affdex-cpp-sdk-3.2-20-ubuntu-xenial-xerus-64bit.tar.gz
-mkdir $HOME/affdex-sdk
-tar -xzvf affdex-cpp-sdk-3.2-20-ubuntu-xenial-xerus-64bit.tar.gz -C $HOME/affdex-sdk
-export AFFDEX_DATA_DIR=$HOME/affdex-sdk/data
-git clone https://github.com/Affectiva/cpp-sdk-samples.git $HOME/sdk-samples
-mkdir $HOME/build
-cd $HOME/build
-cmake -DOpenCV_DIR=/usr/ -DBOOST_ROOT=/usr/ -DAFFDEX_DIR=$HOME/affdex-sdk $HOME/sdk-samples
-make
-export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/affdex-sdk/lib
-```
-
-*CentOS*
-- Download Affdex SDK [from here](https://knowledge.affectiva.com/docs/getting-started-with-the-affectiva-sdk-for-linux)
-
-```bashrc
-sudo yum install libcurl-devel.x86_64 libuuid-devel.x86_64 opencv-devel cmake.x86_64
-wget https://sourceforge.net/projects/boost/files/boost/1.55.0/boost_1_55_0.tar.gz/download -O boost_1_55_0.tar.gz
-tar -xzvf boost_1_55_0.tar.gz -C $HOME
-cd boost_1_55_0
-./bootstrap.sh --with-libraries=log,serialization,system,date_time,filesystem,regex,timer,chrono,thread,program_options
-sudo ./b2 link=static install
-wget https://download.affectiva.com/linux/affdex-cpp-sdk-3.2-2893-centos-7-64bit.tar.gz
-mkdir $HOME/affdex-sdk
-tar -xzvf affdex-cpp-sdk-3.2-2893-centos-7-64bit.tar.gz -C $HOME/affdex-sdk
-export AFFDEX_DATA_DIR=$HOME/affdex-sdk/data
-git clone https://github.com/Affectiva/cpp-sdk-samples.git $HOME/sdk-samples
-mkdir $HOME/build
-cd $HOME/build
-cmake -DOpenCV_DIR=/usr/ -DBOOST_ROOT=/usr/ -DAFFDEX_DIR=$HOME/affdex-sdk $HOME/sdk-samples
-make
-export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/affdex-sdk/lib
-```
-
-OpenCV-webcam-demo (c++)
-------------------
-
-Project for demoing the [FrameDetector class](https://knowledge.affectiva.com/docs/analyze-a-video-frame-stream-3). It grabs frames from the camera, analyzes them and displays the results on screen.
-
-The following command line arguments can be used to run it:
-
-    -h [ --help ]                        Display this help message.
-    -d [ --data ] arg (=data)            Path to the data folder
-    -r [ --resolution ] arg (=640 480)   Resolution in pixels (2-values): width height
-    --pfps arg (=30)                     Processing framerate.
-    --cfps arg (=30)                     Camera capture framerate.
-    --bufferLen arg (=30)                Process buffer size.
-    --cid arg (=0)                       Camera ID.
-    --faceMode arg (=0)                  Face detector mode (large faces vs small faces).
-    --numFaces arg (=1)                  Number of faces to be tracked.
-    --draw arg (=1)                      Draw metrics on screen.
-
-Video-demo (c++)
-----------
-
-Project for demoing the Windows SDK [VideoDetector class](https://knowledge.affectiva.com/docs/analyze-a-recorded-video-file) and [PhotoDetector class](https://knowledge.affectiva.com/docs/analyze-a-photo-4). It processes video or image files, displays the emotion metrics, and exports the results in a CSV file.
-
-The following command line arguments can be used to run it:
-
-    -h [ --help ]               Display this help message.
-    -d [ --data ] arg (=data)   Path to the data folder
-    -i [ --input ] arg          Video or photo file to process.
-    --pfps arg (=30)            Processing framerate.
-    --draw arg (=1)             Draw video on screen.
-    --faceMode arg (=1)         Face detector mode (large faces vs small faces).
-    --numFaces arg (=1)         Number of faces to be tracked.
-    --loop arg (=0)             Loop over the video being processed.
-
-For an example of how to use Affdex in a C# application, please refer to [AffdexMe](https://github.com/affectiva/affdexme-win)
+Start two processes:
+
+gphoto2 to capture images:
+
+`gphoto2 --port usb: --capture-image-and-download -I 1 --filename=/home/crowd/output/frame%06n.jpg`
+
+The modified 'webcam demo' to analyse and generate json:
+
+`/home/crowd/build/opencv-webcam-demo/opencv-webcam-demo --data /home/crowd/affdex-sdk/data --faceMode 1 --numFaces 80 -o /home/crowd/output-backup/ --draw 0`
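The two processes in the new README hand work off through numbered files: gphoto2's `--filename` pattern (`frame%06n.jpg`) and the modified demo's reader both resolve to `frame000001.jpg`, `frame000002.jpg`, and so on. A minimal Python sketch of that naming contract (helper names are mine, not from the repo):

```python
# Sketch of the numbered-frame handoff between the capture and analysis
# processes. The writer emits frame%06d.jpg; the consumer looks for the
# first frame number whose file has not appeared yet.

def frame_name(nr, ext="jpg"):
    """Filename shared by the gphoto2 writer and the demo's reader."""
    return "frame%06d.%s" % (nr, ext)

def next_missing(existing):
    """First frame number (counting from 1) not yet present."""
    nr = 1
    while frame_name(nr) in existing:
        nr += 1
    return nr

print(frame_name(11))   # frame000011.jpg
print(next_missing({"frame000001.jpg", "frame000002.jpg"}))   # 3
```

The six-digit zero-padding matters: it keeps lexicographic and numeric file order identical, so directory listings stay in capture order.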
common/LoggingImageListener.hpp (new file, 139 lines)

@@ -0,0 +1,139 @@
#pragma once

#include <iostream>
#include <memory>
#include <chrono>
#include <thread>
#include <mutex>
#include <fstream>
#include <deque>
#include <vector>
#include <map>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <boost/filesystem.hpp>
#include <boost/timer/timer.hpp>
#include <boost/program_options.hpp>
#include <boost/algorithm/string.hpp>

#include "ImageListener.h"

using namespace affdex;

/**
 * TODO: make sure this handles logging to JSON in onImageResults()
 */
class LoggingImageListener : public ImageListener
{
    std::mutex mMutex;
    std::deque<std::pair<Frame, std::map<FaceId, Face> > > mDataArray;

    double mCaptureLastTS;
    double mCaptureFPS;
    double mProcessLastTS;
    double mProcessFPS;
    std::ofstream &fStream;
    std::chrono::time_point<std::chrono::system_clock> mStartT;
    const bool mDrawDisplay;
    const int spacing = 10;
    const float font_size = 0.5f;
    const int font = cv::FONT_HERSHEY_COMPLEX_SMALL;

    std::vector<std::string> expressions;
    std::vector<std::string> emotions;
    std::vector<std::string> emojis;
    std::vector<std::string> headAngles;

    std::map<affdex::Glasses, std::string> glassesMap;
    std::map<affdex::Gender, std::string> genderMap;
    std::map<affdex::Age, std::string> ageMap;
    std::map<affdex::Ethnicity, std::string> ethnicityMap;

public:

    LoggingImageListener(std::ofstream &csv, const bool draw_display)
        : fStream(csv), mDrawDisplay(draw_display), mStartT(std::chrono::system_clock::now()),
          mCaptureLastTS(-1.0f), mCaptureFPS(-1.0f),
          mProcessLastTS(-1.0f), mProcessFPS(-1.0f)
    {
        expressions = {
            "smile", "innerBrowRaise", "browRaise", "browFurrow", "noseWrinkle",
            "upperLipRaise", "lipCornerDepressor", "chinRaise", "lipPucker", "lipPress",
            "lipSuck", "mouthOpen", "smirk", "eyeClosure", "attention", "eyeWiden", "cheekRaise",
            "lidTighten", "dimpler", "lipStretch", "jawDrop"
        };

        emotions = {
            "joy", "fear", "disgust", "sadness", "anger",
            "surprise", "contempt", "valence", "engagement"
        };

        headAngles = { "pitch", "yaw", "roll" };

        emojis = std::vector<std::string> {
            "relaxed", "smiley", "laughing",
            "kissing", "disappointed",
            "rage", "smirk", "wink",
            "stuckOutTongueWinkingEye", "stuckOutTongue",
            "flushed", "scream"
        };

        genderMap = std::map<affdex::Gender, std::string> {
            { affdex::Gender::Male, "male" },
            { affdex::Gender::Female, "female" },
            { affdex::Gender::Unknown, "unknown" }
        };

        glassesMap = std::map<affdex::Glasses, std::string> {
            { affdex::Glasses::Yes, "yes" },
            { affdex::Glasses::No, "no" }
        };

        ageMap = std::map<affdex::Age, std::string> {
            { affdex::Age::AGE_UNKNOWN, "unknown" },
            { affdex::Age::AGE_UNDER_18, "under 18" },
            { affdex::Age::AGE_18_24, "18-24" },
            { affdex::Age::AGE_25_34, "25-34" },
            { affdex::Age::AGE_35_44, "35-44" },
            { affdex::Age::AGE_45_54, "45-54" },
            { affdex::Age::AGE_55_64, "55-64" },
            { affdex::Age::AGE_65_PLUS, "65 plus" }
        };

        ethnicityMap = std::map<affdex::Ethnicity, std::string> {
            { affdex::Ethnicity::UNKNOWN, "unknown" },
            { affdex::Ethnicity::CAUCASIAN, "caucasian" },
            { affdex::Ethnicity::BLACK_AFRICAN, "black african" },
            { affdex::Ethnicity::SOUTH_ASIAN, "south asian" },
            { affdex::Ethnicity::EAST_ASIAN, "east asian" },
            { affdex::Ethnicity::HISPANIC, "hispanic" }
        };
    }

    void onImageResults(std::map<FaceId, Face> faces, Frame image) override
    {
        std::lock_guard<std::mutex> lg(mMutex);
        mDataArray.push_back(std::pair<Frame, std::map<FaceId, Face>>(image, faces));
        std::chrono::time_point<std::chrono::system_clock> now = std::chrono::system_clock::now();
        std::chrono::milliseconds milliseconds = std::chrono::duration_cast<std::chrono::milliseconds>(now - mStartT);
        double seconds = milliseconds.count() / 1000.f;
        mProcessFPS = 1.0f / (seconds - mProcessLastTS);
        mProcessLastTS = seconds;
    };

    void onImageCapture(Frame image) override
    {
        std::lock_guard<std::mutex> lg(mMutex);
        mCaptureFPS = 1.0f / (image.getTimestamp() - mCaptureLastTS);
        mCaptureLastTS = image.getTimestamp();
    };
};
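The listener derives both fps figures the same way: the rate is the reciprocal of the gap between consecutive timestamps, fps = 1 / (t_now - t_last). The same bookkeeping as a small Python sketch (class and method names are mine, mirroring the `mProcessFPS` / `mProcessLastTS` pair):

```python
class FpsTracker:
    """Mirrors the mProcessFPS / mProcessLastTS bookkeeping above."""

    def __init__(self):
        self.last_ts = -1.0   # sentinel, as in the C++ constructor
        self.fps = -1.0

    def tick(self, seconds):
        # fps is the reciprocal of the gap between consecutive timestamps
        self.fps = 1.0 / (seconds - self.last_ts)
        self.last_ts = seconds

t = FpsTracker()
t.tick(1.0)
t.tick(1.5)    # 0.5 s between frames -> 2 fps
print(t.fps)   # 2.0
```

Note that the first `tick` after construction divides by `seconds + 1.0` because of the `-1.0` sentinel, so the very first fps reading is meaningless in the C++ code as well.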
@@ -332,7 +332,7 @@ public:
         cv::putText(img, fps_str, cv::Point(img.cols - 110, img.rows - left_margin - spacing), font, font_size, clr);
         sprintf(fps_str, "process fps: %2.0f", mProcessFPS);
         cv::putText(img, fps_str, cv::Point(img.cols - 110, img.rows - left_margin), font, font_size, clr);
+        cv::namedWindow("analyze video", CV_WINDOW_NORMAL);
         cv::imshow("analyze video", img);
         std::lock_guard<std::mutex> lg(mMutex);
         cv::waitKey(30);
@@ -14,6 +14,7 @@
 #include "AFaceListener.hpp"
 #include "PlottingImageListener.hpp"
+#include "LoggingImageListener.hpp"
 #include "StatusListener.hpp"
@@ -49,9 +50,9 @@ FeaturePoint maxPoint(VecFeaturePoint points)
 std::string getAsJson(int framenr, const std::map<FaceId, Face> faces, const double timeStamp)
 {
     std::stringstream ss;
-    ss << "{" << "'t':" << timeStamp << ",";
-    ss << "'nr':" << framenr << ",";
-    ss << "'faces':[";
+    ss << "{" << "\"t\":" << timeStamp << ",";
+    ss << "\"nr\":" << framenr << ",";
+    ss << "\"faces\":[";

     int i(0);

@@ -78,7 +79,7 @@ std::string getAsJson(int framenr, const std::map<FaceId, Face> faces, const dou
     float *values = (float *)&f.measurements.orientation;
     for (std::string angle : { "pitch", "yaw", "roll" })
     {
-        ss << "'" << angle << "':" << (*values) << ",";
+        ss << "\"" << angle << "\":" << (*values) << ",";
         values++;
     }

@@ -88,7 +89,7 @@ std::string getAsJson(int framenr, const std::map<FaceId, Face> faces, const dou
         "surprise", "contempt", "valence", "engagement"
     })
     {
-        ss << "'" << emotion << "':" << (*values) << ",";
+        ss << "\"" << emotion << "\":" << (*values) << ",";
         values++;
     }

@@ -100,18 +101,18 @@ std::string getAsJson(int framenr, const std::map<FaceId, Face> faces, const dou
         "lidTighten", "dimpler", "lipStretch", "jawDrop"
     })
     {
-        ss << "'" << expression << "':" << (*values) << ",";
+        ss << "\"" << expression << "\":" << (*values) << ",";
         values++;
     }

     FeaturePoint tl = minPoint(f.featurePoints);
     FeaturePoint br = maxPoint(f.featurePoints);

-    ss << "'rect':{'x':" << tl.x << ",'y':" << tl.y
-       << ",'w':" << (br.x - tl.x) << ",'h':" << (br.y - tl.y) << "},";
+    ss << "\"rect\":{\"x\":" << tl.x << ",\"y\":" << tl.y
+       << ",\"w\":" << (br.x - tl.x) << ",\"h\":" << (br.y - tl.y) << "},";

-    ss << "'ioDistance':"<< f.measurements.interocularDistance << ",";
-    ss << "'id':"<< f.id;
+    ss << "\"ioDistance\":"<< f.measurements.interocularDistance << ",";
+    ss << "\"id\":"<< f.id;
     ss << "}";
 }
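The quote change in these hunks is what makes the output machine-readable: strict JSON requires double-quoted keys, so the old single-quoted records were rejected by standard parsers. A sketch of the record shape `getAsJson` emits, with invented field values, round-tripped through Python's `json` module:

```python
import json

# Shape of one getAsJson record: timestamp, frame number, and a list of
# faces carrying head angles, metrics, a bounding rect and an id.
# All numeric values below are made up for illustration.
record = {
    "t": 12.34,
    "nr": 7,
    "faces": [{
        "pitch": -3.1, "yaw": 0.2, "roll": 1.5,
        "joy": 42.0, "valence": 10.0,
        "rect": {"x": 100, "y": 80, "w": 64, "h": 64},
        "ioDistance": 55.0,
        "id": 0,
    }],
}

parsed = json.loads(json.dumps(record))   # round-trips: keys are double-quoted
print(parsed["faces"][0]["rect"]["w"])    # 64

# The pre-fix output used single quotes, which strict JSON rejects:
try:
    json.loads("{'t':12.34}")
except ValueError:
    print("single-quoted keys are not valid JSON")
```

This is why `parse_output.py` later in this commit can consume the files with a plain `json.load`.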
@@ -138,12 +139,12 @@ int main(int argsc, char ** argsv)

     std::vector<int> resolution;
     int process_framerate = 30;
-    int camera_framerate = 15;
     int buffer_length = 2;
-    int camera_id = 0;
     unsigned int nFaces = 1;
     bool draw_display = true;
-    int faceDetectorMode = (int)FaceDetectorMode::LARGE_FACES;
+    int faceDetectorMode = (int)FaceDetectorMode::SMALL_FACES;
+    boost::filesystem::path imgPath("~/emo_in_file.jpg");
+    boost::filesystem::path outPath("~/output/");

     float last_timestamp = -1.0f;
     float capture_fps = -1.0f;

@@ -160,14 +161,13 @@ int main(int argsc, char ** argsv)
 #else // _WIN32
     ("data,d", po::value< affdex::path >(&DATA_FOLDER)->default_value(affdex::path("data"), std::string("data")), "Path to the data folder")
 #endif // _WIN32
-    ("resolution,r", po::value< std::vector<int> >(&resolution)->default_value(DEFAULT_RESOLUTION, "640 480")->multitoken(), "Resolution in pixels (2-values): width height")
     ("pfps", po::value< int >(&process_framerate)->default_value(30), "Processing framerate.")
-    ("cfps", po::value< int >(&camera_framerate)->default_value(30), "Camera capture framerate.")
     ("bufferLen", po::value< int >(&buffer_length)->default_value(30), "process buffer size.")
-    ("cid", po::value< int >(&camera_id)->default_value(0), "Camera ID.")
-    ("faceMode", po::value< int >(&faceDetectorMode)->default_value((int)FaceDetectorMode::LARGE_FACES), "Face detector mode (large faces vs small faces).")
+    ("faceMode", po::value< int >(&faceDetectorMode)->default_value((int)FaceDetectorMode::SMALL_FACES), "Face detector mode (large faces vs small faces).")
     ("numFaces", po::value< unsigned int >(&nFaces)->default_value(1), "Number of faces to be tracked.")
     ("draw", po::value< bool >(&draw_display)->default_value(true), "Draw metrics on screen.")
+    //~ ("file,f", po::value< boost::filesystem::path >(&imgPath)->default_value(imgPath), "Filename of image that is watched/tracked for changes.")
+    ("frameOutput,o", po::value< boost::filesystem::path >(&outPath)->default_value(outPath), "Directory to store the frame in (and json)")
     ;
     po::variables_map args;
     try

@@ -194,14 +194,11 @@ int main(int argsc, char ** argsv)
     std::cerr << description << std::endl;
     return 1;
 }
-if (resolution.size() != 2)
+if (!boost::filesystem::exists(outPath))
 {
-    std::cerr << "Only two numbers must be specified for resolution." << std::endl;
-    return 1;
-}
-else if (resolution[0] <= 0 || resolution[1] <= 0)
-{
-    std::cerr << "Resolutions must be positive number." << std::endl;
+    std::cerr << "Folder doesn't exist: " << outPath.native() << std::endl << std::endl;
+    std::cerr << "Try specifying the output folder through the command line" << std::endl;
+    std::cerr << description << std::endl;
     return 1;
 }
@@ -223,27 +220,7 @@ int main(int argsc, char ** argsv)
     frameDetector->setFaceListener(faceListenPtr.get());
     frameDetector->setProcessStatusListener(videoListenPtr.get());

-    /*std::string cameraPipeline;
-    cameraPipeline ="v4l2src device=/dev/video0 extra-controls=\"c,exposure_auto=1,exposure_absolute=500\" ! ";
-    cameraPipeline+="video/x-raw, format=BGR, framerate=30/1, width=(int)1280,height=(int)720 ! ";
-    cameraPipeline+="appsink";
-
-    cv::VideoCapture webcam;
-    webcam.open(cameraPipeline);*/
-    cv::VideoCapture webcam(camera_id); //Connect to the first webcam
-    std::cerr << "Camera: " << camera_id << std::endl;
-    std::cerr << "- Setting the frame rate to: " << camera_framerate << std::endl;
-    //~ webcam.set(CV_CAP_PROP_FPS, camera_framerate); //Set webcam framerate.
-    std::cerr << "- Setting the resolution to: " << resolution[0] << "*" << resolution[1] << std::endl;
-    webcam.set(CV_CAP_PROP_FRAME_HEIGHT, 240);
-    webcam.set(CV_CAP_PROP_FRAME_WIDTH, 320);
-
     auto start_time = std::chrono::system_clock::now();
-    if (!webcam.isOpened())
-    {
-        std::cerr << "Error opening webcam!" << std::endl;
-        return 1;
-    }
-
     std::cout << "Max num of faces set to: " << frameDetector->getMaxNumberFaces() << std::endl;
     std::string mode;

@@ -262,19 +239,28 @@ int main(int argsc, char ** argsv)

     //Start the frame detector thread.
     frameDetector->start();
-    int framenr = 0;
-    do{
-        /* cv::Mat img;
-        if (!webcam.read(img)) //Capture an image from the camera
-        {
-            std::cerr << "Failed to read frame from webcam! " << std::endl;
-            break;
-        }*/
-        std::string infile = "/home/crowd/IMG_0011.JPG";
-        cv::Mat img = imread(infile, 1);
-
-        //~ imread(img);
+    int frameNrIn = 1;
+    int frameNrOut = 1;
+    std::time_t lastImgUpdate(0);
+    while(true){ //(cv::waitKey(20) != -1);
+        char buff[100];
+        snprintf(buff, sizeof(buff), "frame%06d.jpg", frameNrIn);
+        boost::filesystem::path imgPath = outPath / buff;
+        if ( !boost::filesystem::exists( imgPath.native() ) || frameNrIn > frameNrOut ) {
+            // wait for file to appear
+            // and for the in file to be parsed (frame out)
+            usleep(5000); // wait 1/20 sec to avoid useless fast loop
+        } else {
+            std::cerr << "Read " << imgPath.native() << std::endl;
+            char buff[100];
+            snprintf(buff, sizeof(buff), "frame%06d.json", frameNrIn);
+            boost::filesystem::path jsonPath = outPath / buff;
+            // don't redo existing jsons
+            if( !boost::filesystem::exists( jsonPath.native() )) {
+                cv::Mat img = imread(imgPath.native(), 1);

     //Calculate the Image timestamp and the capture frame rate;
     const auto milliseconds = std::chrono::duration_cast<std::chrono::milliseconds>(std::chrono::system_clock::now() - start_time);

@@ -285,11 +271,16 @@ int main(int argsc, char ** argsv)
     capture_fps = 1.0f / (seconds - last_timestamp);
     last_timestamp = seconds;
     frameDetector->process(f); //Pass the frame to detector
+            } else {
+                frameNrOut ++; // this won't happen later, but nr. should stay equal if skipping items.
+            }
-    // For each frame processed
+            frameNrIn++;
+        }
+
+        // For each frame processed (returns async)
         if (listenPtr->getDataSize() > 0)
         {
-            framenr++;
             std::pair<Frame, std::map<FaceId, Face> > dataPoint = listenPtr->getData();
             Frame frame = dataPoint.first;

@@ -301,36 +292,23 @@ int main(int argsc, char ** argsv)
             listenPtr->draw(faces, frame);
         }

-        // std::cerr << "timestamp: " << frame.getTimestamp()
-        //           << " cfps: " << listenPtr->getCaptureFrameRate()
-        //           << " pfps: " << listenPtr->getProcessingFrameRate()
-        //           << " faces: " << faces.size() << endl;
-
-        //Output metrics to the file
-        //listenPtr->outputToFile(faces, frame.getTimestamp());
-
-        std:cout << getAsJson(framenr, faces, frame.getTimestamp()) << std::endl;
+        std::string json = getAsJson(frameNrOut, faces, frame.getTimestamp());
+        std::cout << json << std::endl;

+        // store json
         char buff[100];
-        snprintf(buff, sizeof(buff), "frame%06d.jpg", framenr);
-        std::string targetFilename = buff; // convert to std::string
+        snprintf(buff, sizeof(buff), "frame%06d.json", frameNrOut);
+        boost::filesystem::path targetFilename = outPath / buff;
+        std::ofstream out(targetFilename.native());
+        std::cerr << "write "<< targetFilename.native() << std::endl;
+        out << json << "\n";
+        out.close();

-        vector<int> compression_params;
-        compression_params.push_back(CV_IMWRITE_JPEG_QUALITY);
-        compression_params.push_back(90);
-
-        imwrite(targetFilename, img, compression_params);
-
-        break;
+        frameNrOut++;
     }

 }

-#ifdef _WIN32
-    while (!GetAsyncKeyState(VK_ESCAPE) && videoListenPtr->isRunning());
-#else // _WIN32
-    while (videoListenPtr->isRunning());//(cv::waitKey(20) != -1);
-#endif
     std::cerr << "Stopping FrameDetector Thread" << endl;
     frameDetector->stop(); //Stop frame detector thread
 }
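The rewritten main loop above polls for the next `frame%06d.jpg`, sleeps briefly when it has not appeared, and skips any frame whose `.json` sidecar already exists. (One nit in the committed code: `usleep(5000)` is 5 ms, not the 1/20 s its comment claims.) The control flow, sketched in Python with a hypothetical helper name:

```python
import os
import time

def poll_once(out_dir, frame_nr):
    """One pass of the reader loop above; returns the action taken.

    Mirrors the C++ logic: wait if the jpg is missing, skip if the json
    sidecar already exists, otherwise the frame should be processed.
    """
    jpg = os.path.join(out_dir, "frame%06d.jpg" % frame_nr)
    jsn = os.path.join(out_dir, "frame%06d.json" % frame_nr)
    if not os.path.exists(jpg):
        time.sleep(0.005)   # matches usleep(5000): 5 ms between polls
        return "wait"
    if os.path.exists(jsn):
        return "skip"       # don't redo existing jsons
    return "process"
```

Checking for the `.json` before reprocessing is what makes the reader safe to restart mid-run: already-analyzed frames are passed over instead of being analyzed twice.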
parse_output.py (new file, 111 lines)

@@ -0,0 +1,111 @@
import os
from PIL import Image, ImageDraw
import argparse
import json


parser = argparse.ArgumentParser(description='Parses opencv-webcam-demo json output files and collects statistics')
parser.add_argument('--frameOutput', '-o', required=True, help='directory to look for frames & json')

args = parser.parse_args()


class Face:
    def __init__(self, frame, data):
        self.id = data['id']
        self.frame = frame  # Frame class
        self.data = data    # json data

    def getFaceImg(self):
        r = self.data['rect']
        return self.frame.getImg().crop((int(r['x']), int(r['y']), int(r['x'] + r['w']), int(r['y'] + r['h'])))


class Frame:
    """
    Everything for an analysed frame
    """
    def __init__(self, outputPath, nr):
        self.outputPath = outputPath
        self.nr = nr
        self.name = "frame%06d" % nr
        self.jsonPath = os.path.join(outputPath, self.name + ".json")
        self.imgPath = os.path.join(outputPath, self.name + ".jpg")
        self.faces = None  # init with getFaces

    def getTime(self):
        return os.path.getmtime(self.imgPath)

    def getJson(self):
        with open(self.jsonPath) as fp:
            return json.load(fp)

    def getImg(self):
        return Image.open(self.imgPath)

    def getFaces(self):
        if self.faces is None:
            j = self.getJson()
            self.faces = [Face(self, f) for f in j['faces']]
        return self.faces

    def exists(self):
        return os.path.exists(self.jsonPath) and os.path.exists(self.imgPath)


frames = {}

def loadFrames(frameDir):
    global frames
    nr = 2
    nextFrame = Frame(frameDir, nr)
    # TODO: make threaded and infinite loop that updates global frames
    while nextFrame.exists():
        frames[nr] = nextFrame
        nr += 1
        nextFrame = Frame(frameDir, nr)
    return frames


def cutOutFaces(frame, targetDir):
    for faceNr, face in enumerate(frame.getFaces()):
        print(faceNr, face)
        img = face.getFaceImg()
        faceImgPath = os.path.join(targetDir, frame.name + "-%s.jpg" % face.id)
        print(faceImgPath)
        img.save(faceImgPath)


frames = loadFrames(args.frameOutput)

# sanity check: timestamps must increase monotonically across frames
lastTime = None
for frameNr, frame in frames.items():
    thisTime = frame.getJson()['t']
    if lastTime is not None and lastTime > thisTime:
        print("ERROR: timestamps are not monotonic!")
    lastTime = thisTime

faceDir = os.path.join(args.frameOutput, 'faces')

if not os.path.exists(faceDir):
    os.mkdir(faceDir)


def sumEmotions():
    total = 0.
    summed = 0.
    items = 0
    for frameNr, frame in frames.items():
        for face in frame.getFaces():
            total += abs(face.data['valence'])
            summed += face.data['valence']
            items += 1

    average = summed / items
    print("Total emotion %d, positivity score %d (average: %s)" % (total, summed, average))

sumEmotions()

#~ for frameNr, frame in frames.items():
#~     cutOutFaces(frame, faceDir)
run.py (108 lines changed)

@@ -1,45 +1,87 @@
 #sudo ~/build/opencv-webcam-demo/opencv-webcam-demo --data ~/affdex-sdk/data --faceMode 1 --numFaces 40 --draw 1
+#sudo ~/build/opencv-webcam-demo/opencv-webcam-demo --data ~/affdex-sdk/data --faceMode 1 --numFaces 100 -o ~/output -f ~/emo_in_file.jpg
 import subprocess
-from SimpleWebSocketServer import SimpleWebSocketServer, WebSocket
-
-proc = subprocess.Popen([
+import json
+import threading
+import logging
+
+logging.basicConfig(level=logging.DEBUG,
+                    format='(%(threadName)-10s) %(message)s',
+                    )
+
+outputDir = "/home/crowd/output"
+tmpImgFile = "/home/crowd/emo_in_file.jpg"
+
+
+def handleLine(msg):
+    try:
+        j = json.loads(msg)
+    except Exception as e:
+        logging.error("Couldn't parse json " + msg)
+        return
+
+    # now we have json
+    logging.debug(j)
+
+print " ".join([
+    'gphoto2',
+    "--port", "usb:",
+    "--capture-image-and-download",
+    "-I", "1",  # photo every second
+    "--filename="+tmpImgFile, "--force-overwrite",
+])
+print " ".join([
     '/home/crowd/build/opencv-webcam-demo/opencv-webcam-demo',
     "--data", "/home/crowd/affdex-sdk/data",
     "--faceMode", "1",
     "--numFaces", "40",
     "--draw", "1",
-    "--pfps", "5",
-    "--cfps", "5",
+    "-o", outputDir,
+    "-f", tmpImgFile,
+])
+
+
+# gphoto2 --port usb: --capture-image-and-download -I 1 --filename=~/test.jpg --force-overwrite
+def captureImages():
+    procCapture = subprocess.Popen([
+        'gphoto2',
+        "--port", "usb:",
+        "--capture-image-and-download",
+        "-I", "1",  # photo every second
+        "--filename="+tmpImgFile, "--force-overwrite",
     ],stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
-
-clients = []
-class EchoOutput(WebSocket):
-
-    # def handleMessage(self):
-    #     # echo message back to client
-    #     self.sendMessage(self.data)
-
-    def handleConnected(self):
-        clients.append(self)
-        print(self.address, 'connected')
-
-    def handleClose(self):
-        clients.remove(self)
-        print(self.address, 'closed')
-
-server = SimpleWebSocketServer('', 8080, EchoOutput)
-
-def send_message(msg):
-    print "send", msg, "to", len(clients), "clients"
-    for client in list(clients):
-        client.sendMessage(u''+msg)
-
-while proc.poll() is None:
-    server.serveonce()
-    line = proc.stdout.readline()
+    while procCapture.poll() is None:
+        line = procCapture.stdout.readline()
         if line == '':
             continue
-    send_message(line)
-    #print "test:", line.rstrip()
+        logging.debug(line)
+        if line.startswith("*** Error"):
+            raise Exception("Camera not found on USB, or unable to claim it")
+    return
+
+
+def processImages():
+    procProcess = subprocess.Popen([
+        '/home/crowd/build/opencv-webcam-demo/opencv-webcam-demo',
+        "--data", "/home/crowd/affdex-sdk/data",
+        "--faceMode", "1",
+        "--numFaces", "40",
+        "--draw", "1",
+        "-o", outputDir,
+        "-f", tmpImgFile,
+    ],stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+    while procProcess.poll() is None:
+        line = procProcess.stdout.readline()
+        if line == '':
+            continue
+
+        handleLine(line)
+    return
+
+
+captureThread = threading.Thread(name='capture', target=captureImages)
+processThread = threading.Thread(name='process', target=processImages)
+
+captureThread.start()
+processThread.start()
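Both worker threads in the new run.py follow the same pattern: spawn a subprocess, read its stdout line by line, skip empty lines, and dispatch the rest to a handler. A stripped-down, self-contained sketch of that pump (the Python child command stands in for gphoto2 / the demo binary; `pump_lines` is my name, not the script's):

```python
import subprocess
import sys

def pump_lines(cmd, handle):
    """Read a child's stdout line by line and dispatch non-empty lines,
    as captureImages()/processImages() do in run.py."""
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, text=True)
    for line in proc.stdout:
        line = line.rstrip("\n")
        if line == "":
            continue        # run.py skips blank lines the same way
        handle(line)
    proc.wait()
    return proc.returncode

seen = []
rc = pump_lines([sys.executable, "-c", "print('a'); print(); print('b')"],
                seen.append)
print(seen, rc)   # ['a', 'b'] 0
```

Merging stderr into stdout (`stderr=subprocess.STDOUT`) matches the capture thread, which is how it can spot gphoto2's `*** Error` lines in the same stream.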