[BUG] cmake hangs on hunter_unpack_directory #664

@aggiehorns

Description

This is cross posted from: luxonis/depthai-core#754

I am trying to build depthai-core from Luxonis with CMake, which uses Hunter. I have traced the hang to hunter_unpack_directory.cmake line 143. Expanded, I can see the following.

C:/.hunter/_Base/Download/Hunter/0.23.322/cb0ea1f/Unpacked/cmake/modules/hunter_unpack_directory.cmake(143): execute_process(COMMAND C:/Program Files/Microsoft Visual Studio/2022/Preview/Common7/IDE/CommonExtensions/Microsoft/CMake/CMake/bin/cmake.exe;-DHUNTER_INSTALL_PREFIX=C:/.hunter/_Base/cb0ea1f/04da027/7258ad3/Install;-DLIST_OF_FILES=C:/.hunter/_Base/Cellar/d43c7d359805f81df2468a0289529cc1f21fa918/d43c7d3/files.list;-DSHELL_LINK_SCRIPT=C:/.hunter/_Base/Cellar/d43c7d359805f81df2468a0289529cc1f21fa918/d43c7d3/link-all.sh;-DCELLAR_RAW_DIRECTORY=C:/.hunter/_Base/Cellar/d43c7d359805f81df2468a0289529cc1f21fa918/d43c7d3/raw;-DPYTHON_LINK_SCRIPT=C:/.hunter/_Base/Download/Hunter/0.23.322/cb0ea1f/Unpacked/scripts/link-all.py;-P;C:/.hunter/_Base/Download/Hunter/0.23.322/cb0ea1f/Unpacked/scripts/link-all.cmake WORKING_DIRECTORY C:/.hunter/_Base/Cellar/d43c7d359805f81df2468a0289529cc1f21fa918/d43c7d3 RESULT_VARIABLE result OUTPUT_VARIABLE output ERROR_VARIABLE error )

Running this command manually, it appears things stall in link-all.py. I confirmed this by force-killing the python processes, which causes cmake to resume execution with a "python script failed" message.

This is Hunter 0.23.322 and Python 3.11. Maybe it doesn't like my Python version?


Digging further: link-all.py is throwing the following error, but it gets silently caught and a race condition results.

Exception caught: [WinError 183] Cannot create a file when that file already exists: 'C:/.hunter/_Base/Cellar/f39bd3e9e4cb82ab84a570e8415d8d5262d4d86e/f39bd3e/raw\\include/nlohmann/json.hpp' -> 'C:/.hunter/_Base/cb0ea1f/04da027/7258ad3/Install\\include/nlohmann/json.hpp'
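For reference, this is just how os.link behaves: hard-linking onto a destination that already exists raises FileExistsError (WinError 183 on Windows, EEXIST elsewhere). A minimal sketch using temporary files (paths here are hypothetical, not the Hunter cellar paths):

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, 'src.txt')
    dst = os.path.join(tmp, 'dst.txt')
    for path in (src, dst):
        with open(path, 'w') as f:
            f.write('data')
    try:
        os.link(src, dst)  # destination already exists -> raises
    except FileExistsError as exc:
        print('os.link failed:', exc.errno)
    # Guarding with os.path.exists, as in the suggested fix, avoids the exception:
    if not os.path.exists(dst):
        os.link(src, dst)
```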


Ok, I think I've figured out what it is. Two problems: 1) the script uses the CPU core count to decide how many worker processes to spawn, and my core count exceeds the limit the pool can handle on Windows, which deadlocks the script. 2) The script does not check whether a destination file already exists before linking. The second may not be a showstopper on its own.

Suggested code changes:

import math
import multiprocessing
import os
import sys

# src_list and cmd_args are populated earlier in link-all.py
# (the file list and the parsed cellar/dest arguments).
list_len = len(src_list)
proc_num = min(multiprocessing.cpu_count(), 32)  # cap the worker count
files_per_job = math.ceil(list_len / proc_num)

def job(chunk):
  try:
    for filename in chunk:
      link_from = os.path.join(cmd_args.cellar, filename)
      link_to = os.path.join(cmd_args.dest, filename)
      if not os.path.exists(link_to):  # skip files that already exist
        os.link(link_from, link_to)
    return 0
  except Exception as exc:
    print('Exception caught: {}'.format(exc))
    return 1

def run_link():
  # Split the file list into one chunk per worker.
  chunks = [src_list[i:i + files_per_job] for i in range(0, list_len, files_per_job)]
  pool = multiprocessing.Pool(processes=len(chunks))
  result = pool.map(job, chunks)
  pool.close()
  pool.join()

  if 1 in result:
    sys.exit('Some job failed')

if __name__ == '__main__':
  run_link()
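To illustrate the chunking math in the suggested change (with a hypothetical 100-entry file list standing in for files.list): the cap bounds the worker count, and the chunks partition the list exactly.

```python
import math
import multiprocessing

src_list = ['file{}.h'.format(i) for i in range(100)]  # stand-in file list
list_len = len(src_list)
proc_num = min(multiprocessing.cpu_count(), 32)  # never spawn more than 32 workers
files_per_job = math.ceil(list_len / proc_num)

chunks = [src_list[i:i + files_per_job] for i in range(0, list_len, files_per_job)]
assert len(chunks) <= proc_num                   # worker count stays under the cap
assert sum(len(c) for c in chunks) == list_len   # every file appears in exactly one chunk
print(len(chunks), 'chunks of up to', files_per_job, 'files each')
```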

Labels: bug