diff --git a/README.md b/README.md
index 1b5a4c9..3a27614 100644
--- a/README.md
+++ b/README.md
@@ -1,117 +1,9 @@
-#Sample Apps for Affdex SDK for Windows and Linux
+Start two processes:
+
-Welcome to our repository on GitHub! Here you will find example code to get you started with our Affdex Linux SDK 3.2, Affdex Windows SDK 3.4 and begin emotion-enabling you own app! Documentation for the SDKs is available on the Affectiva's Developer Portal.
+`gphoto2`, to capture an image every second and write numbered frames:
+
-*Build Status*
-- Windows: [![Build status](https://ci.appveyor.com/api/projects/status/pn2y9h8a3nnkiw41?svg=true)](https://ci.appveyor.com/project/ahamino/win-sdk-samples)
-- Ubuntu: [![Build Status](https://travis-ci.org/Affectiva/cpp-sdk-samples.svg?branch=master)](https://travis-ci.org/Affectiva/cpp-sdk-samples)
+`gphoto2 --port usb: --capture-image-and-download -I 1 --filename=/home/crowd/output/frame%06n.jpg`
+
-Dependencies
-------------
+The modified `opencv-webcam-demo`, to analyse each frame and write the results as json:
+
-*Windows*
-- Affdex SDK 3.4 (64 bit)
-- Visual Studio 2013 or higher
-
-*Linux*
-- Ubuntu 14.04 or CentOS 7
-- Affdex SDK 3.2
-- CMake 2.8 or higher
-- GCC 4.8
-
-*Additional dependencies*
-
-- OpenCV 2.4
-- Boost 1.55
-- libuuid
-- libcurl
-- libopenssl
-
-Installation
-------------
-
-
-*Windows*
-- Download Affdex SDK [from here](https://knowledge.affectiva.com/docs/getting-started-with-the-emotion-sdk-for-windows)
-- Install the SDK using MSI installer.
-- The additional dependencies get installed automatically by NuGet.
-
-*Ubuntu*
-- Download Affdex SDK [from here](https://knowledge.affectiva.com/docs/getting-started-with-the-affectiva-sdk-for-linux)
-
-```bashrc
-sudo apt-get install build-essential libopencv-dev libboost1.55-all-dev libcurl4-openssl uuid-dev cmake
-wget https://download.affectiva.com/linux/affdex-cpp-sdk-3.2-20-ubuntu-xenial-xerus-64bit.tar.gz
-mkdir $HOME/affdex-sdk
-tar -xzvf affdex-cpp-sdk-3.2-20-ubuntu-xenial-xerus-64bit.tar.gz -C $HOME/affdex-sdk
-export AFFDEX_DATA_DIR=$HOME/affdex-sdk/data
-git clone https://github.com/Affectiva/cpp-sdk-samples.git $HOME/sdk-samples
-mkdir $HOME/build
-cd $HOME/build
-cmake -DOpenCV_DIR=/usr/ -DBOOST_ROOT=/usr/ -DAFFDEX_DIR=$HOME/affdex-sdk $HOME/sdk-samples
-make
-export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/affdex-sdk/lib
-```
-
-*CentOS*
-- Download Affdex SDK [from here](https://knowledge.affectiva.com/docs/getting-started-with-the-affectiva-sdk-for-linux)
-
-```bashrc
-sudo yum install libcurl-devel.x86_64 libuuid-devel.x86_64 opencv-devel cmake.x86_64
-wget https://sourceforge.net/projects/boost/files/boost/1.55.0/boost_1_55_0.tar.gz/download -O boost_1_55_0.tar.gz
-tar -xzvf boost_1_55_0.tar.gz -C $HOME
-cd boost_1_55
-./bootstrap.sh --with-libraries=log,serialization,system,date_time,filesystem,regex,timer,chrono,thread,program_options
-sudo ./b2 link=static install
-wget https://download.affectiva.com/linux/affdex-cpp-sdk-3.2-2893-centos-7-64bit.tar.gz
-mkdir $HOME/affdex-sdk
-tar -xzvf affdex-cpp-sdk-3.2-2893-centos-7-64bit.tar.gz -C $HOME/affdex-sdk
-export AFFDEX_DATA_DIR=$HOME/affdex-sdk/data
-git clone https://github.com/Affectiva/cpp-sdk-samples.git $HOME/sdk-samples
-mkdir $HOME/build
-cd $HOME/build
-cmake -DOpenCV_DIR=/usr/ -DBOOST_ROOT=/usr/ -DAFFDEX_DIR=$HOME/affdex-sdk $HOME/sdk-samples
-make
-export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HOME/affdex-sdk/lib
-```
-
-OpenCV-webcam-demo (c++)
-------------------
-
-Project for demoing the [FrameDetector class](https://knowledge.affectiva.com/docs/analyze-a-video-frame-stream-3). It grabs frames from the camera, analyzes them and displays the results on screen.
-
-The following command line arguments can be used to run it:
-
-    -h [ --help ]                        Display this help message.
-    -d [ --data ] arg (=data)            Path to the data folder
-    -r [ --resolution ] arg (=640 480)   Resolution in pixels (2-values): width height
-    --pfps arg (=30)                     Processing framerate.
-    --cfps arg (=30)                     Camera capture framerate.
-    --bufferLen arg (=30)                process buffer size.
-    --cid arg (=0)                       Camera ID.
-    --faceMode arg (=0)                  Face detector mode (large faces vs small faces).
-    --numFaces arg (=1)                  Number of faces to be tracked.
-    --draw arg (=1)                      Draw metrics on screen.
-
-Video-demo (c++)
-----------
-
-Project for demoing the Windows SDK [VideoDetector class](https://knowledge.affectiva.com/docs/analyze-a-recorded-video-file) and [PhotoDetector class](https://knowledge.affectiva.com/docs/analyze-a-photo-4). It processs video or image files, displays the emotion metrics and exports the results in a csv file.
-
-The following command line arguments can be used to run it:
-
-    -h [ --help ]               Display this help message.
-    -d [ --data ] arg (=data)   Path to the data folder
-    -i [ --input ] arg          Video or photo file to process.
-    --pfps arg (=30)            Processing framerate.
-    --draw arg (=1)             Draw video on screen.
-    --faceMode arg (=1)         Face detector mode (large faces vs small faces).
-    --numFaces arg (=1)         Number of faces to be tracked.
-    --loop arg (=0)             Loop over the video being processed.
-
-For an example of how to use Affdex in a C# application .. please refer to [AffdexMe](https://github.com/affectiva/affdexme-win)
+`/home/crowd/build/opencv-webcam-demo/opencv-webcam-demo --data /home/crowd/affdex-sdk/data --faceMode 1 --numFaces 80 -o /home/crowd/output-backup/ --draw 0`
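+
+Each analysed frame gets a `frameNNNNNN.json` written next to it. Going by `getAsJson()` in `opencv-webcam-demo.cpp`, a result line should look roughly like this (abridged, values illustrative):
+
+```json
+{"t":12.3,"nr":17,"faces":[{"pitch":-3.2,"yaw":10.1,"roll":1.7,
+  "joy":85.0,"fear":0.1,"disgust":0.2,"sadness":0.0,"anger":0.1,
+  "surprise":2.3,"contempt":0.4,"valence":60.2,"engagement":74.9,
+  "smile":90.1, ...the remaining expression metrics...,
+  "rect":{"x":412,"y":230,"w":156,"h":156},"ioDistance":62.4,"id":1}]}
+```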
diff --git a/common/LoggingImageListener.hpp b/common/LoggingImageListener.hpp
new file mode 100644
index 0000000..84cb33b
--- /dev/null
+++ b/common/LoggingImageListener.hpp
@@ -0,0 +1,139 @@
+#pragma once
+
+#include <iostream>
+#include <fstream>
+#include <sstream>
+#include <mutex>
+#include <deque>
+#include <utility>
+#include <chrono>
+#include <vector>
+#include <string>
+#include <map>
+#include <opencv2/core/core.hpp>
+#include <opencv2/highgui/highgui.hpp>
+
+#include "ImageListener.h"
+
+using namespace affdex;
+
+/**
+ * TODO: make sure this handles logging to json in onImageResults()
+ */
+class LoggingImageListener : public ImageListener
+{
+
+    std::mutex mMutex;
+    std::deque<std::pair<Frame, std::map<FaceId, Face> > > mDataArray;
+
+    double mCaptureLastTS;
+    double mCaptureFPS;
+    double mProcessLastTS;
+    double mProcessFPS;
+    std::ofstream &fStream;
+    std::chrono::time_point<std::chrono::system_clock> mStartT;
+    const bool mDrawDisplay;
+    const int spacing = 10;
+    const float font_size = 0.5f;
+    const int font = cv::FONT_HERSHEY_COMPLEX_SMALL;
+
+    std::vector<std::string> expressions;
+    std::vector<std::string> emotions;
+    std::vector<std::string> emojis;
+    std::vector<std::string> headAngles;
+
+    std::map<affdex::Glasses, std::string> glassesMap;
+    std::map<affdex::Gender, std::string> genderMap;
+    std::map<affdex::Age, std::string> ageMap;
+    std::map<affdex::Ethnicity, std::string> ethnicityMap;
+
+public:
+
+    LoggingImageListener(std::ofstream &csv, const bool draw_display)
+        : fStream(csv), mDrawDisplay(draw_display), mStartT(std::chrono::system_clock::now()),
+          mCaptureLastTS(-1.0f), mCaptureFPS(-1.0f),
+          mProcessLastTS(-1.0f), mProcessFPS(-1.0f)
+    {
+        expressions = {
+            "smile", "innerBrowRaise", "browRaise", "browFurrow", "noseWrinkle",
+            "upperLipRaise", "lipCornerDepressor", "chinRaise", "lipPucker", "lipPress",
+            "lipSuck", "mouthOpen", "smirk", "eyeClosure", "attention", "eyeWiden", "cheekRaise",
+            "lidTighten", "dimpler", "lipStretch", "jawDrop"
+        };
+
+        emotions = {
+            "joy", "fear", "disgust", "sadness", "anger",
+            "surprise", "contempt", "valence", "engagement"
+        };
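+        // NOTE: these lists mirror the field order of the corresponding
+        // affdex structs; getAsJson() in opencv-webcam-demo.cpp relies on
+        // exactly this order when it walks the structs via a raw float pointer.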
headAngles = { "pitch", "yaw", "roll" }; + + + emojis = std::vector { + "relaxed", "smiley", "laughing", + "kissing", "disappointed", + "rage", "smirk", "wink", + "stuckOutTongueWinkingEye", "stuckOutTongue", + "flushed", "scream" + }; + + genderMap = std::map { + { affdex::Gender::Male, "male" }, + { affdex::Gender::Female, "female" }, + { affdex::Gender::Unknown, "unknown" }, + + }; + + glassesMap = std::map { + { affdex::Glasses::Yes, "yes" }, + { affdex::Glasses::No, "no" } + }; + + ageMap = std::map { + { affdex::Age::AGE_UNKNOWN, "unknown"}, + { affdex::Age::AGE_UNDER_18, "under 18" }, + { affdex::Age::AGE_18_24, "18-24" }, + { affdex::Age::AGE_25_34, "25-34" }, + { affdex::Age::AGE_35_44, "35-44" }, + { affdex::Age::AGE_45_54, "45-54" }, + { affdex::Age::AGE_55_64, "55-64" }, + { affdex::Age::AGE_65_PLUS, "65 plus" } + }; + + ethnicityMap = std::map { + { affdex::Ethnicity::UNKNOWN, "unknown"}, + { affdex::Ethnicity::CAUCASIAN, "caucasian" }, + { affdex::Ethnicity::BLACK_AFRICAN, "black african" }, + { affdex::Ethnicity::SOUTH_ASIAN, "south asian" }, + { affdex::Ethnicity::EAST_ASIAN, "east asian" }, + { affdex::Ethnicity::HISPANIC, "hispanic" } + }; + } + + + void onImageResults(std::map faces, Frame image) override + { + std::lock_guard lg(mMutex); + mDataArray.push_back(std::pair>(image, faces)); + std::chrono::time_point now = std::chrono::system_clock::now(); + std::chrono::milliseconds milliseconds = std::chrono::duration_cast(now - mStartT); + double seconds = milliseconds.count() / 1000.f; + mProcessFPS = 1.0f / (seconds - mProcessLastTS); + mProcessLastTS = seconds; + }; + + void onImageCapture(Frame image) override + { + std::lock_guard lg(mMutex); + mCaptureFPS = 1.0f / (image.getTimestamp() - mCaptureLastTS); + mCaptureLastTS = image.getTimestamp(); + }; + + +}; diff --git a/common/PlottingImageListener.hpp b/common/PlottingImageListener.hpp index 9e9a5f4..c6aefed 100644 --- a/common/PlottingImageListener.hpp +++ b/common/PlottingImageListener.hpp @@ -332,7 +332,7 @@ public: cv::putText(img, fps_str, cv::Point(img.cols - 110, img.rows - left_margin - spacing), font, font_size, clr); sprintf(fps_str, "process fps: %2.0f", mProcessFPS); cv::putText(img, fps_str, cv::Point(img.cols - 110, img.rows - left_margin), font, font_size, clr); - + cv::namedWindow("analyze video", CV_WINDOW_NORMAL); cv::imshow("analyze video", img); std::lock_guard lg(mMutex); cv::waitKey(30); diff --git a/opencv-webcam-demo/opencv-webcam-demo.cpp b/opencv-webcam-demo/opencv-webcam-demo.cpp index 3780656..5ea08de 100644 --- a/opencv-webcam-demo/opencv-webcam-demo.cpp +++ b/opencv-webcam-demo/opencv-webcam-demo.cpp @@ -14,6 +14,7 @@ #include "AFaceListener.hpp" #include "PlottingImageListener.hpp" +#include "LoggingImageListener.hpp" #include "StatusListener.hpp" @@ -49,9 +50,9 @@ FeaturePoint maxPoint(VecFeaturePoint points) std::string getAsJson(int framenr, const std::map faces, const double timeStamp) { std::stringstream ss; - ss << "{" << "'t':" << timeStamp << ","; - ss << "'nr':" << framenr << ","; - ss << "'faces':["; + ss << "{" << "\"t\":" << timeStamp << ","; + ss << "\"nr\":" << framenr << ","; + ss << "\"faces\":["; int i(0); @@ -78,7 +79,7 @@ std::string getAsJson(int framenr, const std::map faces, const dou float *values = (float *)&f.measurements.orientation; for (std::string angle : { "pitch", "yaw", "roll" }) { - ss << "'" << angle << "':" << (*values) << ","; + ss << "\"" << angle << "\":" << (*values) << ","; values++; } @@ -88,7 +89,7 @@ std::string getAsJson(int framenr, const 
-    ss << "{" << "'t':" << timeStamp << ",";
-    ss << "'nr':" << framenr << ",";
-    ss << "'faces':[";
+    ss << "{" << "\"t\":" << timeStamp << ",";
+    ss << "\"nr\":" << framenr << ",";
+    ss << "\"faces\":[";
 
     int i(0);
@@ -78,7 +79,7 @@ std::string getAsJson(int framenr, const std::map<FaceId, Face> faces, const double timeStamp)
         float *values = (float *)&f.measurements.orientation;
         for (std::string angle : { "pitch", "yaw", "roll" })
         {
-            ss << "'" << angle << "':" << (*values) << ",";
+            ss << "\"" << angle << "\":" << (*values) << ",";
             values++;
         }
@@ -88,7 +89,7 @@ std::string getAsJson(int framenr, const std::map<FaceId, Face> faces, const double timeStamp)
             "surprise", "contempt", "valence", "engagement"
         })
         {
-            ss << "'" << emotion << "':" << (*values) << ",";
+            ss << "\"" << emotion << "\":" << (*values) << ",";
             values++;
         }
@@ -100,18 +101,18 @@ std::string getAsJson(int framenr, const std::map<FaceId, Face> faces, const double timeStamp)
             "lidTighten", "dimpler", "lipStretch", "jawDrop"
         })
         {
-            ss << "'" << expression << "':" << (*values) << ",";
+            ss << "\"" << expression << "\":" << (*values) << ",";
             values++;
         }
 
         FeaturePoint tl = minPoint(f.featurePoints);
         FeaturePoint br = maxPoint(f.featurePoints);
 
-        ss << "'rect':{'x':" << tl.x << ",'y':" << tl.y
-            << ",'w':" << (br.x - tl.x) << ",'h':" << (br.y - tl.y) << "},";
+        ss << "\"rect\":{\"x\":" << tl.x << ",\"y\":" << tl.y
+            << ",\"w\":" << (br.x - tl.x) << ",\"h\":" << (br.y - tl.y) << "},";
 
-        ss << "'ioDistance':" << f.measurements.interocularDistance << ",";
-        ss << "'id':" << f.id;
+        ss << "\"ioDistance\":" << f.measurements.interocularDistance << ",";
+        ss << "\"id\":" << f.id;
 
         ss << "}";
     }
@@ -138,12 +139,12 @@ int main(int argsc, char ** argsv)
     std::vector<int> resolution;
     int process_framerate = 30;
-    int camera_framerate = 15;
     int buffer_length = 2;
-    int camera_id = 0;
     unsigned int nFaces = 1;
     bool draw_display = true;
-    int faceDetectorMode = (int)FaceDetectorMode::LARGE_FACES;
+    int faceDetectorMode = (int)FaceDetectorMode::SMALL_FACES;
+    boost::filesystem::path imgPath("~/emo_in_file.jpg");
+    boost::filesystem::path outPath("~/output/");
 
     float last_timestamp = -1.0f;
     float capture_fps = -1.0f;
@@ -160,14 +161,13 @@ int main(int argsc, char ** argsv)
 #else // _WIN32
         ("data,d", po::value< affdex::path >(&DATA_FOLDER)->default_value(affdex::path("data"), std::string("data")), "Path to the data folder")
 #endif // _WIN32
-        ("resolution,r", po::value< std::vector<int> >(&resolution)->default_value(DEFAULT_RESOLUTION, "640 480")->multitoken(), "Resolution in pixels (2-values): width height")
         ("pfps", po::value< int >(&process_framerate)->default_value(30), "Processing framerate.")
-        ("cfps", po::value< int >(&camera_framerate)->default_value(30), "Camera capture framerate.")
         ("bufferLen", po::value< int >(&buffer_length)->default_value(30), "process buffer size.")
-        ("cid", po::value< int >(&camera_id)->default_value(0), "Camera ID.")
-        ("faceMode", po::value< int >(&faceDetectorMode)->default_value((int)FaceDetectorMode::LARGE_FACES), "Face detector mode (large faces vs small faces).")
+        ("faceMode", po::value< int >(&faceDetectorMode)->default_value((int)FaceDetectorMode::SMALL_FACES), "Face detector mode (large faces vs small faces).")
         ("numFaces", po::value< unsigned int >(&nFaces)->default_value(1), "Number of faces to be tracked.")
         ("draw", po::value< bool >(&draw_display)->default_value(true), "Draw metrics on screen.")
+        //~ ("file,f", po::value< boost::filesystem::path >(&imgPath)->default_value(imgPath), "Filename of image that is watched/tracked for changes.")
+        ("frameOutput,o", po::value< boost::filesystem::path >(&outPath)->default_value(outPath), "Directory to store the frames in (and json)")
     ;
     po::variables_map args;
     try
@@ -194,14 +194,11 @@ int main(int argsc, char ** argsv)
         std::cerr << description << std::endl;
         return 1;
     }
-    if (resolution.size() != 2)
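+    // NOTE: boost::filesystem does not expand "~", so the defaults above only
+    // resolve relative to the working directory; pass an absolute path with -o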
+    if (!boost::filesystem::exists(outPath))
     {
-        std::cerr << "Only two numbers must be specified for resolution." << std::endl;
-        return 1;
-    }
-    else if (resolution[0] <= 0 || resolution[1] <= 0)
-    {
-        std::cerr << "Resolutions must be positive number." << std::endl;
+        std::cerr << "Folder doesn't exist: " << outPath.native() << std::endl << std::endl;
+        std::cerr << "Try specifying the output folder through the command line" << std::endl;
+        std::cerr << description << std::endl;
         return 1;
     }
@@ -222,29 +219,9 @@ int main(int argsc, char ** argsv)
     frameDetector->setImageListener(listenPtr.get());
     frameDetector->setFaceListener(faceListenPtr.get());
     frameDetector->setProcessStatusListener(videoListenPtr.get());
-
-    /*std::string cameraPipeline;
-    cameraPipeline ="v4l2src device=/dev/video0 extra-controls=\"c,exposure_auto=1,exposure_absolute=500\" ! ";
-    cameraPipeline+="video/x-raw, format=BGR, framerate=30/1, width=(int)1280,height=(int)720 ! ";
-    cameraPipeline+="appsink";
-
-    cv::VideoCapture webcam;
-    webcam.open(cameraPipeline);*/
-    cv::VideoCapture webcam(camera_id);    //Connect to the first webcam
-    std::cerr << "Camera: " << camera_id << std::endl;
-    std::cerr << "- Setting the frame rate to: " << camera_framerate << std::endl;
-    //~ webcam.set(CV_CAP_PROP_FPS, camera_framerate);    //Set webcam framerate.
-    std::cerr << "- Setting the resolution to: " << resolution[0] << "*" << resolution[1] << std::endl;
-    webcam.set(CV_CAP_PROP_FRAME_HEIGHT, 240);
-    webcam.set(CV_CAP_PROP_FRAME_WIDTH, 320);
 
     auto start_time = std::chrono::system_clock::now();
-    if (!webcam.isOpened())
-    {
-        std::cerr << "Error opening webcam!" << std::endl;
-        return 1;
-    }
-
+
     std::cout << "Max num of faces set to: " << frameDetector->getMaxNumberFaces() << std::endl;
     std::string mode;
     switch (frameDetector->getFaceDetectorMode())
@@ -262,34 +239,48 @@ int main(int argsc, char ** argsv)
     //Start the frame detector thread.
     frameDetector->start();
 
-    int framenr = 0;
-    do{
+    // frameNrIn: next capture file to hand to the detector;
+    // frameNrOut: next result expected back (results return asynchronously)
+    int frameNrIn = 1;
+    int frameNrOut = 1;
+    std::time_t lastImgUpdate(0);
+    while (true) { //(cv::waitKey(20) != -1);
+        char buff[100];
+        snprintf(buff, sizeof(buff), "frame%06d.jpg", frameNrIn);
+        boost::filesystem::path imgPath = outPath / buff;
+        if (!boost::filesystem::exists(imgPath.native()) || frameNrIn > frameNrOut) {
+            // wait for the next file to appear, and for the previous frame to
+            // come back out of the detector
+            usleep(5000); // back off for 5 ms to avoid a busy loop
+        } else {
+            std::cerr << "Read " << imgPath.native() << std::endl;
+
+            char buff[100];
+            snprintf(buff, sizeof(buff), "frame%06d.json", frameNrIn);
+            boost::filesystem::path jsonPath = outPath / buff;
+
+            // don't redo existing jsons
+            if (!boost::filesystem::exists(jsonPath.native())) {
+                cv::Mat img = imread(imgPath.native(), 1);
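+                // NOTE: nothing guards against reading a file gphoto2 is still
+                // writing; imread() may then return an empty or truncated image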
" << std::endl; - break; - }*/ - std::string infile = "/home/crowd/IMG_0011.JPG"; - cv::Mat img = imread(infile, 1); - - //~ imread(img); + //Calculate the Image timestamp and the capture frame rate; + const auto milliseconds = std::chrono::duration_cast(std::chrono::system_clock::now() - start_time); + const double seconds = milliseconds.count() / 1000.f; - //Calculate the Image timestamp and the capture frame rate; - const auto milliseconds = std::chrono::duration_cast(std::chrono::system_clock::now() - start_time); - const double seconds = milliseconds.count() / 1000.f; - - // Create a frame - Frame f(img.size().width, img.size().height, img.data, Frame::COLOR_FORMAT::BGR, seconds); - capture_fps = 1.0f / (seconds - last_timestamp); - last_timestamp = seconds; - frameDetector->process(f); //Pass the frame to detector + // Create a frame + Frame f(img.size().width, img.size().height, img.data, Frame::COLOR_FORMAT::BGR, seconds); + capture_fps = 1.0f / (seconds - last_timestamp); + last_timestamp = seconds; + frameDetector->process(f); //Pass the frame to detector + } else { + frameNrOut ++; // this won't happen later, but nr. should stay equal if skipping items. + } - // For each frame processed + frameNrIn++; + } + + // For each frame processed (returns async) if (listenPtr->getDataSize() > 0) { - framenr++; std::pair > dataPoint = listenPtr->getData(); Frame frame = dataPoint.first; @@ -301,36 +292,23 @@ int main(int argsc, char ** argsv) listenPtr->draw(faces, frame); } - // std::cerr << "timestamp: " << frame.getTimestamp() - // << " cfps: " << listenPtr->getCaptureFrameRate() - // << " pfps: " << listenPtr->getProcessingFrameRate() - // << " faces: " << faces.size() << endl; + std::string json = getAsJson(frameNrOut, faces, frame.getTimestamp()); + std::cout << json << std::endl; + + // store json + char buff[100]; + snprintf(buff, sizeof(buff), "frame%06d.json", frameNrOut); + boost::filesystem::path targetFilename = outPath / buff; + std::ofstream out(targetFilename.native()); + std::cerr << "write "<< targetFilename.native() << std::endl; + out << json << "\n"; + out.close(); - //Output metrics to the file - //listenPtr->outputToFile(faces, frame.getTimestamp()); - - std:cout << getAsJson(framenr, faces, frame.getTimestamp()) << std::endl; - - char buff[100]; - snprintf(buff, sizeof(buff), "frame%06d.jpg", framenr); - std::string targetFilename = buff; // convert to std::string - - vector compression_params; - compression_params.push_back(CV_IMWRITE_JPEG_QUALITY); - compression_params.push_back(90); - - imwrite(targetFilename, img, compression_params); - - break; + frameNrOut++; } } -#ifdef _WIN32 - while (!GetAsyncKeyState(VK_ESCAPE) && videoListenPtr->isRunning()); -#else // _WIN32 - while (videoListenPtr->isRunning());//(cv::waitKey(20) != -1); -#endif std::cerr << "Stopping FrameDetector Thread" << endl; frameDetector->stop(); //Stop frame detector thread } diff --git a/parse_output.py b/parse_output.py new file mode 100644 index 0000000..6ace657 --- /dev/null +++ b/parse_output.py @@ -0,0 +1,111 @@ +import os +from PIL import Image, ImageDraw +import argparse +import json + + +parser = argparse.ArgumentParser(description='Parses opencv-webcam-demo json output files and collects statistics') +parser.add_argument('--frameOutput', '-o', required=True, help='directory to look for frames & json') + +args = parser.parse_args() + +class Face: + def __init__(self, frame, data): + self.id = data['id'] + self.frame = frame # Frame class + self.data = data # json data + + def 
+    def getFaceImg(self):
+        r = self.data['rect']
+        return self.frame.getImg().crop((int(r['x']), int(r['y']), int(r['x'] + r['w']), int(r['y'] + r['h'])))
+
+class Frame:
+    """
+    Everything for an analysed frame
+    """
+    def __init__(self, outputPath, nr):
+        self.outputPath = outputPath
+        self.nr = nr
+        self.name = "frame%06d" % nr
+        self.jsonPath = os.path.join(outputPath, self.name + ".json")
+        self.imgPath = os.path.join(outputPath, self.name + ".jpg")
+        self.faces = None  # filled lazily by getFaces()
+
+    def getTime(self):
+        return os.path.getmtime(self.imgPath)
+
+    def getJson(self):
+        # raises if the json hasn't been written yet; guard with exists()
+        with open(self.jsonPath) as fp:
+            return json.load(fp)
+
+    def getImg(self):
+        return Image.open(self.imgPath)
+
+    def getFaces(self):
+        if self.faces is None:
+            j = self.getJson()
+            self.faces = [Face(self, f) for f in j['faces']]
+        return self.faces
+
+    def exists(self):
+        return os.path.exists(self.jsonPath) and os.path.exists(self.imgPath)
+
+frames = {}
+
+def loadFrames(frameDir):
+    global frames
+    nr = 2
+    nextFrame = Frame(frameDir, nr)
+    # TODO: make this a thread with an infinite loop that updates the global frames dict
+    while nextFrame.exists():
+        frames[nr] = nextFrame
+        nr += 1
+        nextFrame = Frame(frameDir, nr)
+    return frames
+
+def cutOutFaces(frame, targetDir):
+    for faceNr, face in enumerate(frame.getFaces()):
+        print(faceNr, face)
+        img = face.getFaceImg()
+        faceImgPath = os.path.join(targetDir, frame.name + "-%s.jpg" % face.id)
+        print(faceImgPath)
+        img.save(faceImgPath)
+
+frames = loadFrames(args.frameOutput)
+
+# sanity check: timestamps should increase with the frame number
+lastTime = None
+for frameNr, frame in sorted(frames.items()):
+    thisTime = frame.getJson()['t']
+    if lastTime is not None and lastTime > thisTime:
+        print("ERROR: timestamps out of order at frame %d" % frameNr)
+    lastTime = thisTime
+
+faceDir = os.path.join(args.frameOutput, 'faces')
+
+if not os.path.exists(faceDir):
+    os.mkdir(faceDir)
+
+def sumEmotions():
+    total = 0.
+    summed = 0.
+    items = 0
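+    # valence is signed (negative for negative affect), so the plain sum acts
+    # as a positivity score while the abs() sum measures overall intensity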
+    for frameNr, frame in frames.items():
+        for face in frame.getFaces():
+            total += abs(face.data['valence'])
+            summed += face.data['valence']
+            items += 1
+
+    average = summed / items if items else 0.
+    print("Total emotion %d, positivity score %d (average: %s)" % (total, summed, average))
+
+sumEmotions()
+#~ for frameNr, frame in frames.items():
+    #~ cutOutFaces(frame, faceDir)
diff --git a/run.py b/run.py
index e6eb6ac..ec62309 100644
--- a/run.py
+++ b/run.py
@@ -1,45 +1,87 @@
 #sudo ~/build/opencv-webcam-demo/opencv-webcam-demo --data ~/affdex-sdk/data --faceMode 1 --numFaces 40 --draw 1
-
+#sudo ~/build/opencv-webcam-demo/opencv-webcam-demo --data ~/affdex-sdk/data --faceMode 1 --numFaces 100 -o ~/output -f ~/emo_in_file.jpg
 import subprocess
-from SimpleWebSocketServer import SimpleWebSocketServer, WebSocket
+import json
+import threading
+import logging
 
-proc = subprocess.Popen([
-    '/home/crowd/build/opencv-webcam-demo/opencv-webcam-demo',
-    "--data", "/home/crowd/affdex-sdk/data",
-    "--faceMode", "1",
-    "--numFaces", "40",
-    "--draw", "1",
-    "--pfps", "5",
-    "--cfps", "5",
-    ], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
-
-clients = []
-class EchoOutput(WebSocket):
+logging.basicConfig(level=logging.DEBUG,
+                    format='(%(threadName)-10s) %(message)s',
+                    )
 
-#    def handleMessage(self):
-#        # echo message back to client
-#        self.sendMessage(self.data)
+outputDir = "/home/crowd/output"
+tmpImgFile = "/home/crowd/emo_in_file.jpg"
 
-    def handleConnected(self):
-        clients.append(self)
-        print(self.address, 'connected')
-
-    def handleClose(self):
-        clients.remove(self)
-        print(self.address, 'closed')
-
-server = SimpleWebSocketServer('', 8080, EchoOutput)
+def handleLine(msg):
+    try:
+        j = json.loads(msg)
+    except Exception as e:
+        logging.error("Couldn't parse json " + msg)
+        return
+
+    # now we have json
+    logging.debug(j)
+
+# echo both commands for reference
+print " ".join([
+    'gphoto2',
+    "--port", "usb:",
+    "--capture-image-and-download",
+    "-I", "1",  # photo every second
+    "--filename=" + tmpImgFile, "--force-overwrite",
+    ])
+print " ".join([
+    '/home/crowd/build/opencv-webcam-demo/opencv-webcam-demo',
+    "--data", "/home/crowd/affdex-sdk/data",
+    "--faceMode", "1",
+    "--numFaces", "40",
+    "--draw", "1",
+    "-o", outputDir,
+    "-f", tmpImgFile,
+    ])
 
-def send_message(msg):
-    print "send", msg, "to", len(clients), "clients"
-    for client in list(clients):
-        client.sendMessage(u'' + msg)
+# gphoto2 --port usb: --capture-image-and-download -I 1 --filename=~/test.jpg --force-overwrite
+def captureImages():
+    procCapture = subprocess.Popen([
+        'gphoto2',
+        "--port", "usb:",
+        "--capture-image-and-download",
+        "-I", "1",  # photo every second
+        "--filename=" + tmpImgFile, "--force-overwrite",
+        ], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
+    while procCapture.poll() is None:
+        line = procCapture.stdout.readline()
+        if line == '':
+            continue
+        logging.debug(line)
+        if line.startswith("*** Error"):
+            raise Exception("Camera not found on USB, or unable to claim it")
+    return
 
-while proc.poll() is None:
-    server.serveonce()
-    line = proc.stdout.readline()
-    if line == '':
-        continue
-    send_message(line)
-    #print "test:", line.rstrip()
-
+def processImages():
+    procProcess = subprocess.Popen([
+        '/home/crowd/build/opencv-webcam-demo/opencv-webcam-demo',
+        "--data", "/home/crowd/affdex-sdk/data",
+        "--faceMode", "1",
+        "--numFaces", "40",
+        "--draw", "1",
+        "-o", outputDir,
+        "-f", tmpImgFile,  # NOTE: the file,f option is currently commented out in opencv-webcam-demo.cpp
+        ], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
+    while procProcess.poll() is None:
+        line = procProcess.stdout.readline()
+        if line == '':
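+            # readline() only returns '' at EOF, so spin until poll()
+            # reports that the subprocess exited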
+            continue
+
+        handleLine(line)
+    return
+
+
+captureThread = threading.Thread(name='capture', target=captureImages)
+processThread = threading.Thread(name='process', target=processImages)
+
+captureThread.start()
+processThread.start()
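+
+# wait explicitly for both workers; they only return when their subprocess
+# exits (assumption: waiting here is preferable to letting the non-daemon
+# threads keep the interpreter alive implicitly)
+captureThread.join()
+processThread.join()
+
+# statistics can then be collected from the json files with e.g.:
+#   python parse_output.py -o /home/crowd/output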