[FR] Allow argument for 'local_dst_path' when loading Pyfunc Models #4154
Comments
|
@ameyaparab28 Thanks for filing this feature request. By local filesystem, you mean the filesystem available on your production server machine? Do you want to load those models from a local path on the production server's filesystem rather than downloading them from S3 for each model? This seems like a good first issue since most of what you suggest might already work. Would you like to contribute? cc: @harupy
|
@dmatrix @harupy - Yes, by local filesystem I meant the filesystem on the production server. The reasoning behind opening the FR was that we would ideally like to download the model(s) once from the remote URI to the production server's filesystem and use them multiple times in different applications. I can take up the issue, but would require some guidance for testing and merging it onto the master branch.
|
Hey @dmatrix, I'd like to make my first contribution if the issue is still available.
|
@ankh6 Yes, we welcome contributions. This is an API change, so we also have to ensure backward compatibility.
|
@dmatrix, I plan to work on the issue and will open a PR in a couple of weeks. Thanks!
|
@dmatrix ...I was working on running the linter and unit tests before raising a PR but ran into a number of package import and other errors. I have pasted the bash output below:

source lint.sh
========== pycodestyle ==========
mlflow tests:1:1: E902 FileNotFoundError: [Errno 2] No such file or directory: 'mlflow tests'
========== pylint ==========
Using config file /Users/parabam1/Projects/OpenSource/mlflow/pylintrc
************* Module mlflow.mlflow
mlflow/__init__.py (30,0): [E0611 no-name-in-module] No name 'version' in module 'mlflow'
.
.
.
========== rstcheck ==========
Traceback (most recent call last):
File "/Users/parabam1/miniconda3/envs/mlflow-dev-env/bin/rstcheck", line 8, in <module>
sys.exit(main())
File "/Users/parabam1/miniconda3/envs/mlflow-dev-env/lib/python3.6/site-packages/rstcheck.py", line 918, in main
with enable_sphinx_if_possible():
File "/Users/parabam1/miniconda3/envs/mlflow-dev-env/lib/python3.6/contextlib.py", line 81, in __enter__
return next(self.gen)
File "/Users/parabam1/miniconda3/envs/mlflow-dev-env/lib/python3.6/site-packages/rstcheck.py", line 876, in enable_sphinx_if_possible
status=None)
File "/Users/parabam1/miniconda3/envs/mlflow-dev-env/lib/python3.6/site-packages/sphinx/application.py", line 172, in __init__
raise ApplicationError(__('Source directory and destination '
sphinx.errors.ApplicationError: Source directory and destination directory cannot be identical
One of the previous steps failed, check above

I have set up my development environment using the instructions mentioned here.
|
@ameya-parab can you try resyncing with master and merging the changes into your branch?
Willingness to contribute
The MLflow Community encourages new feature contributions. Would you or another member of your organization be willing to contribute an implementation of this feature (either as an MLflow Plugin or an enhancement to the MLflow code base)?
Proposal Summary
Currently, 'mlflow.pyfunc.load_model(MODEL_URI)' accepts only the remote model URI (S3 in our case) when loading a Python-flavored MLflow model and its artifacts. As part of this call, the load_model method downloads the artifacts registered when logging the MLflow model to a temporary directory on the local filesystem for serving. I would like to request a feature that allows specifying a local path when calling load_model; this would enable users to download the model and its artifacts to a specific location for further analysis and reuse.
Motivation
- Enables downloading the remote model and its artifacts to a specified location, reducing model loading times since the model is loaded directly from a local file path and can be reused by other programs if required.
- Reduces overall model serving time for large models, as the model and its artifacts are available at a local filesystem path for multiple programs to use.
- Helps us tackle long loading times when initializing the model serving framework, since our production deployment consists of an ensemble of models rather than a single model.
- We currently have a workaround that uses shutil to copy the models and their artifacts from the temporary directories to a different location every time a program initializes them; this is not optimized and can be error-prone.
What component(s), interfaces, languages, and integrations does this feature affect?
Components
- area/artifacts: Artifact stores and artifact logging
- area/build: Build and test infrastructure for MLflow
- area/docs: MLflow documentation pages
- area/examples: Example code
- area/model-registry: Model Registry service, APIs, and the fluent client calls for Model Registry
- area/models: MLmodel format, model serialization/deserialization, flavors
- area/projects: MLproject format, project running backends
- area/scoring: Local serving, model deployment tools, spark UDFs
- area/server-infra: MLflow server, JavaScript dev server
- area/tracking: Tracking Service, tracking client APIs, autologging

Interfaces
- area/uiux: Front-end, user experience, JavaScript, plotting
- area/docker: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- area/sqlalchemy: Use of SQLAlchemy in the Tracking Service or Model Registry
- area/windows: Windows support

Languages
- language/r: R APIs and clients
- language/java: Java APIs and clients
- language/new: Proposals for new client languages

Integrations
- integrations/azure: Azure and Azure ML integrations
- integrations/sagemaker: SageMaker integrations
- integrations/databricks: Databricks integrations

Details
Proposed solution
Changes in pyfunc/__init__.py