With grpcio 1.6 on Ubuntu 14.04 and Python 2.7, the gRPC client gets stuck when called via Python multiprocessing.
Code demonstrating the hang:
import os
import time
import sys
import multiprocessing
from grpc.beta import implementations
import tensorflow as tf

sys.path.append('/home/ubuntu/src/python')
from protos import inception_inference_pb2

networkId = '58a23401292bd68c5302d666'

def run_grpc(network_id):
    hostport = 'localhost:9000'
    host, port = 'localhost', 9000
    channel = implementations.insecure_channel(host, int(port))
    stub = inception_inference_pb2.beta_create_InceptionService_stub(channel)
    request = inception_inference_pb2.ModelRequest()
    request.network_id = network_id
    response = None
    print('do_model_info 2 %s %s' % (hostport, network_id))
    result_future = stub.ModelInfo.future(request, 2.0)
    print('do_model_info 3 %s %s' % (hostport, network_id))
    response = result_future.result()
    print('do_model_info 4 %s %s' % (hostport, network_id))
    print(response)

run_grpc(networkId)

t = multiprocessing.Process(target=run_grpc, args=(networkId,))
t.start()
Calling run_grpc directly works; however, with multiprocessing it gets stuck:
python grpc16_stuck.py
do_model_info 2 localhost:9000 58a23401292bd68c5302d666
do_model_info 3 localhost:9000 58a23401292bd68c5302d666
do_model_info 4 localhost:9000 58a23401292bd68c5302d666
message: "Servable not found for request: Latest(58a23401292bd68c5302d666). Failed to lookup session bundle for 58a23401292bd68c5302d666."
do_model_info 2 localhost:9000 58a23401292bd68c5302d666
do_model_info 3 localhost:9000 58a23401292bd68c5302d666
The code works fine under grpcio 1.4.0. I believe any gRPC service should be able to reproduce this issue.
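One workaround that has helped with fork-related gRPC hangs (not verified against this exact report) is to make sure the parent process performs no gRPC calls before forking: in the repro above, the parent calls run_grpc and then forks, so the child inherits the parent's gRPC channel threads and locks in a state it cannot recover from. A minimal sketch of the reordering, with a stand-in for run_grpc (the real one would create its own channel and stub entirely inside the child, as in the repro):

```python
import multiprocessing

def run_grpc(network_id):
    # Stand-in for the issue's run_grpc; the real version creates its
    # own channel and stub entirely inside this function, so all gRPC
    # state is constructed inside the child process after the fork.
    print('child call for %s' % network_id)

if __name__ == '__main__':
    network_id = '58a23401292bd68c5302d666'
    # Fork FIRST, before the parent touches gRPC, so the child does not
    # inherit half-initialized gRPC worker threads from the parent.
    t = multiprocessing.Process(target=run_grpc, args=(network_id,))
    t.start()
    t.join()
    # The parent's own call happens only after the fork has completed.
    run_grpc(network_id)
```

On Python 3.4+ an alternative is multiprocessing's 'spawn' start method, which gives the child a fresh interpreter with no inherited gRPC state; on Python 2.7 only the reordering above is available.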