Object detect module #1570
Conversation
skimage/future/__init__.py
I'd call this module detection. Any thoughts from others?
Definitely start with detect. I'd be fine with detect, detect_object, and probably others.
I agree with @JDWarner, go with detect. It mirrors most of our API, with some well-justified exceptions (filters) and others not so well justified, that we might consider updating for 1.0: segment, restore, view.
Yes, agreed @jni. Simple present verb tense wherever possible.
@warmspringwinds This is shaping up really well. Some things I'd like to see addressed. Some of the old-timers (if I may say so 😄) can help me here.
@warmspringwinds I mean, we should never have 2 implementations of MB LBP detection.
skimage/future/objdetect/cascade.pyx
Don't put more than one blank line in a row within methods/functions.
@vighneshbirodkar @warmspringwinds sorry I don't quite understand the three functions discussion. Our general approach has been to have a Cython inner function for performance-critical bits, and a pure Python wrapper. Is that an option here?
I also don't know anything about the xml, but pre-trained models are in general a very good idea imho. Python in general is a "batteries included" language, so users would be right to expect such convenience. And I rarely favour the creation of a new format, so if something exists that we can parse easily enough, use that. I hope that helps a bit; sorry if I missed the point, but I don't have time right now to delve into the history of this! Thanks both of you for your efforts!
@jni The other way to solve it is to put the function-wrapper in … So, we came up with an ugly solution: to use three functions:
This is exactly the use case cpdef was designed for. What are the downsides of letting Cython handle the dual definition for us again, @vighneshbirodkar? There is a clear argument here for simplicity/conciseness.
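For reference, the cpdef pattern under discussion might look something like this (a hedged Cython sketch; the function and argument names are hypothetical, not taken from this PR):

```cython
# Hypothetical sketch of the cpdef dual-definition pattern.
# A cpdef function gets both a fast C-level entry point (callable
# from other Cython code without Python overhead) and an
# automatically generated Python wrapper, so the wrapper does not
# have to be written by hand. Note the caveat raised later in this
# thread about a nogil restriction on cpdef functions.
cimport cython

@cython.boundscheck(False)
cpdef int evaluate_stage(int[:, ::1] integral_img, int r, int c):
    # performance-critical inner computation; reachable both from
    # Python (via the generated wrapper) and from C-level Cython code
    cdef int value = integral_img[r, c]
    return value
```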
skimage/future/objdetect/setup.py
@vighneshbirodkar @warmspringwinds I'm afraid I agree with @JDWarner. @vighneshbirodkar, your link specifies the performance tradeoff when subclassing a function, but I don't think it would ever apply here. The answer shows when you wouldn't want to use cpdef. At the very least, if we want to use three functions, we would want to see benchmarks showing a clear time advantage to the three-function solution.
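As a rough illustration of the kind of micro-benchmark that would settle this, one could time a pure-Python wrapper against a direct call (a plain-Python sketch; `inner` and `wrapper` are made-up names, and real numbers would have to come from the compiled Cython build):

```python
import timeit

def inner(x):
    # stands in for the performance-critical (Cython) function
    return x * x

def wrapper(x):
    # pure-Python wrapper: adds one extra call frame per call
    return inner(x)

n = 100_000
direct = timeit.timeit("inner(10)", globals=globals(), number=n)
wrapped = timeit.timeit("wrapper(10)", globals=globals(), number=n)
print(f"direct:  {direct:.4f}s for {n} calls")
print(f"wrapped: {wrapped:.4f}s for {n} calls")
```

The wrapper overhead is constant per call, so whether it matters depends entirely on how often the inner function is called from the hot loop.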
@warmspringwinds I agree that …
@vighneshbirodkar @warmspringwinds I didn't know about the nogil restriction imposed by cpdef. That would indeed be a compelling reason not to use it.
Don't hardcode; use omp_get_max_threads.
See http://www.plutospin.com/files/OpenMP_reference.pdf and http://docs.cython.org/src/userguide/parallelism.html#using-openmp-functions
On my system this evaluates to 4.
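On the Python side, the value OpenMP reports can be sanity-checked against the logical CPU count (a small sketch; `omp_get_max_threads` itself is only callable from C/Cython):

```python
import os

# os.cpu_count() reports the number of logical CPUs, which is what
# omp_get_max_threads() typically defaults to when OMP_NUM_THREADS
# is unset.
n_cpus = os.cpu_count()
print(n_cpus)
```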
@vighneshbirodkar
Yes, on my system it's 4 too. But there is one interesting thing.
When I set the number of threads to 10 manually, it works faster.
What processor do you have?
@vighneshbirodkar
Intel(R) Core(TM) i5-4200U CPU @ 1.60GHz
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 4
On-line CPU(s) list: 0-3
Thread(s) per core: 2
Core(s) per socket: 2
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 69
Stepping: 1
CPU MHz: 759.000
BogoMIPS: 4589.61
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 3072K
NUMA node0 CPU(s): 0-3
@vighneshbirodkar
So far, when I test parallel scale computation, it's much slower than the parallel image computation, even though I am not doing any result saving in the scale parallelization part, and the parallel image computation is benchmarked with saving.
@vighneshbirodkar
Wow, I just played with the schedule parameter. Setting it to dynamic made it work faster even with a low number of threads. I guess I will read more about this.
@vighneshbirodkar this occurs because when we spawn threads for each scale, they have unequal amounts of work (the first scales are the hardest ones), and therefore we need to allow them to take up more resources when they need them.
So, we stick with scale threading and dynamic. I hope that my tests were right :)
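The load-imbalance argument can be illustrated with a toy simulation (plain Python; the per-scale costs are made-up, chosen so that work roughly halves at each scale, as in a cascade where the first scales dominate):

```python
# Per-scale work roughly halves at each scale; the first scales
# dominate, so equal-sized static chunks are badly imbalanced.
costs = [2 ** (8 - i) for i in range(8)]   # [256, 128, ..., 2]
n_threads = 4

# Static schedule: contiguous, equal-count chunks per thread.
chunk = len(costs) // n_threads
static_loads = [sum(costs[i * chunk:(i + 1) * chunk])
                for i in range(n_threads)]

# Dynamic schedule (idealized): each task goes to whichever
# thread is currently least loaded.
dynamic_loads = [0] * n_threads
for c in costs:
    dynamic_loads[dynamic_loads.index(min(dynamic_loads))] += c

print("static  max load:", max(static_loads))   # 384
print("dynamic max load:", max(dynamic_loads))  # 256
```

Parallel runtime is bounded by the busiest thread, so with the same total work the dynamic assignment finishes sooner, which matches the observed speedup from `schedule='dynamic'`.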
@warmspringwinds I don't think your PC can run 8 threads. You have 4 CPUs, not 4 cores.
http://ark.intel.com/products/75459/Intel-Core-i5-4200U-Processor-3M-Cache-up-to-2_60-GHz
@vighneshbirodkar Yes, my bad, I made a mistake.
I'm seeing some strange behavior in my code:
I did the scale loop, but now I get different results every time and I can't figure out why.
I enclosed my files here, they can be compiled directly:
https://gist.github.com/warmspringwinds/8a1348038df52586905c
Please, have a look if you have time.
@warmspringwinds This is fantastic. I'm sure with some optimizations, we will have a really fast parallel face detector.
Just faced the issue #867 (comment).
Seems to compile fine. I get the following error when I run the tests, but …
@arokem Yeah, it's relevant. Does it give you the same result when you use your trick with clang-omp for compiling?
Thanks, Ariel, for giving a hand here. Daniil, I am away until Sunday, but happy to organize an OSX test account for you then.
@stefanv That would be great!
doc/examples/plot_face_detection.py
# Initialize the detector cascade.
detector = detect.Cascade(trained_file)
img = data.lena()
We should make sure lena doesn't go in again.
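For context, the lena image was later removed from scikit-image entirely; data.astronaut() is the stock replacement in current releases (hedged: the exact data API at the time of this PR may have differed):

```python
from skimage import data

# astronaut() is the standard demo image that replaced lena()
# in scikit-image's example gallery.
img = data.astronaut()
print(img.shape)
```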
@stefanv Sure.
I have mostly been working on making it compile with clang first.
Is it still possible to get access to a machine with clang?
I don't know the history of this, but @warmspringwinds asked me to test this with Clang, and I did, and it works fine! (ana3)
jni@airbears2-10-142-107-176 Thu Jun 09 18:14
~/projects/scikit-image (objdetect)$ nosetests skimage/future/detect/tests/test_cascade.py
..
----------------------------------------------------------------------
Ran 2 tests in 2.984s
OK
@JDWarner I have removed the lena image and updated the documentation. The first trained file is an .xml file for the frontal human face.
:D
excited for this :D
@warmspringwinds huh? Since when do we host notebooks, let alone notebooks in the root dir? The kitty face detector is awesome.
Thank you all for your work on this module.
@warmspringwinds Is the feature ready to be merged into scikit-image? I am really excited to have it in scikit-image.
What's the state of this? CI already shows some bitrot.
@warmspringwinds This may now be our longest living PR :) Let's revisit:
Yeah, let's finish this up :D The detection part is ready, and if you could assign someone to help me out with conditional OpenMP support, I think we can finish it quickly. I will also update the PR with respect to recent changes in scikit-image in the following days. Let me know what you think.
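One common approach to conditional OpenMP support is to probe the compiler at build time and only add the OpenMP flag if a trivial test program compiles (a hypothetical sketch, not code from this PR; the helper name, compiler command, and flag are assumptions):

```python
import os
import subprocess
import tempfile

def have_openmp(cc="cc", flag="-fopenmp"):
    """Return True if `cc` can compile and link a trivial OpenMP program."""
    src = ("#include <omp.h>\n"
           "int main(void) { return omp_get_max_threads() > 0 ? 0 : 1; }\n")
    with tempfile.TemporaryDirectory() as tmp:
        c_file = os.path.join(tmp, "check_openmp.c")
        exe = os.path.join(tmp, "check_openmp")
        with open(c_file, "w") as f:
            f.write(src)
        try:
            result = subprocess.run([cc, flag, c_file, "-o", exe],
                                    capture_output=True)
        except FileNotFoundError:
            return False  # no such compiler on this system
        return result.returncode == 0

# A setup.py could then add the flag to extra_compile_args and
# extra_link_args only when have_openmp() is True, so builds with
# Apple clang (no OpenMP by default) still succeed, just without
# parallelism.
print(have_openmp())
```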
Just jumping in (I might need a cascade classifier somewhere).
@glemaitre as a heavy mac user, I would be upset if we required gcc for compilation. It's not trivial to get gcc working on macOS. (Actually, even getting clang is kind of annoying.) I also like to support different compilers, which might have different optimisation characteristics.
Good point.
Superseded by #3267.



I marked up the future module.
Changed the mb_lbp function to work faster.