State of the art (2019) face detection with RetinaFace and MXNet

Testing deep learning neural networks on public datasets is fun, but it's only on unseen data that you can really see how published techniques perform.

Recently, I was trying to detect human faces in Game of Thrones footage. I was surprised to see that the most widely used techniques didn't fare very well.

I first tried the OpenCV Haar cascade detector, then the Dlib HOG frontal face detector. Both worked well only under ideal conditions (frontal face, no unusual lighting, no occlusion). In a real-world scenario, that makes them useless.

I then came across this link, which advertised the OpenCV DNN face detector as performant. The results were better than with the Haar cascade and HOG detectors, but still not really good on my Game of Thrones footage. I then understood that yes, human face detection is an academically solved problem on many datasets, but many of those datasets do not yet reflect the true "in-the-wild" diversity you find in real footage.

I then googled around to see what the actual state of the art for human face detection in 2019 is. I finally came across this repo and their RetinaFace network, but they didn't provide any Dockerfile, so it was a bit of a pain to install and run. I made the Dockerfile, ran some tests, and the results are outstanding! On my Game of Thrones footage, their RetinaFace network performs really well, even on faces with odd angles, occlusion, and poor lighting.

I set up a GitHub repository (https://github.com/francoisruty/fruty_face-detection) with a ready-made Dockerfile, short instructions, and the pre-trained network weights. Feel free to use it to test the RetinaFace network on any footage, in 5 minutes max!
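For reference, getting a shell inside the container boils down to two commands. The `docker run` invocation is the one from my repo's instructions; the `docker build` step and the `retinaface` image tag are my assumed naming, so adapt them to your local setup:

```shell
# Build the image from the Dockerfile at the repo root
# (the "retinaface" tag is an arbitrary choice)
docker build -t retinaface .

# Run the container with GPU support; {{dataPath}} and {{modelPath}}
# are placeholders for your local footage and pre-trained weights folders
docker run -it --runtime=nvidia \
  -v {{dataPath}}:/data \
  -v {{modelPath}}:/model \
  retinaface /bin/bash
```

Note that `--runtime=nvidia` requires the Nvidia container runtime to be installed on the host.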

11 thoughts on "State of the art (2019) face detection with RetinaFace and MXNet"

  1. Hey there, is there any chance to make it work on an iMac? I noticed that a lot of these face detectors work with a GPU or need PyTorch with CUDA, but that's not possible without Nvidia.
    Thank you so much

    1. Hello, personally I do machine learning exclusively on Linux, for the reason you just mentioned.
      In principle you should be able to make it work on a Mac, it's just that it will be terribly slow.

      1. Alright, so can you give me any advice on how to change this command of yours:
        docker run -it --runtime=nvidia -v {{dataPath}}:/data -v {{modelPath}}:/model retinaface /bin/bash

        so I could run it on a Mac, or on Windows with Nvidia graphics? I already have Docker and everything you mentioned in the GitHub repo.
        Thank you so much
        P.S. My goal is not speed, I just need to make it work on one video for my school project, and the deadline is pretty soon :/

  2. Hello, the Docker command should be the same, but I have no idea whether the Nvidia runtime is available on Mac and Windows. I'm not sure, you would have to google that.

  3. Looks like it's not going to work without the Nvidia runtime… so probably no chance to make it work on a PC or Mac. I already tried to reimplement your code in a new version (trying to do it with TensorFlow) but had one problem with that… could you check your GitHub issue called "Code Reimplementation"?

    1. Hello, sorry, I can't read your TensorFlow reimplementation, I lack the time.
      If you're studying deep learning, I recommend you either use suitable hardware, or fork my repo and edit the Dockerfile to run the CPU versions of the DL frameworks involved (but the first option is best IMO).

      1. Thank you for your answer, I understand, but since I don't have enough time to get suitable hardware (a PC with Ubuntu on it), is there at least any chance I could send you a video file that you would run through your RetinaFace algorithm and send back to me? I would be infinitely grateful and would mention you in my school project.
