Conversation
Add a streamlit-based demo, web_demo2.py, for a better UI. You need to install streamlit and the streamlit-chat component first:
pip install streamlit
pip install streamlit-chat
then run with the following command:
streamlit run web_demo2.py --server.port 6006
RuntimeError: Library cudart is not initialized
Install the CUDA Toolkit.
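When this error appears even though the GPU driver works, a quick stdlib probe can tell you whether the CUDA runtime library (libcudart, shipped with the CUDA Toolkit, not the driver) is visible to the dynamic loader at all. The helper name below is illustrative, not part of the repo:

```python
import ctypes.util

def cudart_visible() -> bool:
    """Return True if the dynamic loader can locate the CUDA runtime library.

    "Library cudart is not initialized" usually means libcudart is missing or
    not on the loader path, i.e. the CUDA Toolkit itself is not installed.
    """
    return ctypes.util.find_library("cudart") is not None

print(cudart_visible())
```

If this prints False, installing the CUDA Toolkit (or adding its lib directory to the loader path) is the likely fix.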
streamlit-based web_demo
ERROR: Ignored the following versions that require a different python version: 0.0.2.1 Requires-Python >=3.8; 0.0.2.2 Requires-Python >=3.8
TypeError: Protocols cannot be instantiated
Hello, after replacing the model with the fine-tuned one, no answer can be generated. At runtime you are prompted: 'You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.' May I ask how to correctly call the fine-tuned model?
Hello, there is an "uncaught exception": Protocols cannot be instantiated. Could you please help me solve it? 2023-05-05 07:20:53.674 Uncaught exception
Hi, I've encountered the following error: SSLError: HTTPSConnectionPool(host='huggingface.co', port=443): Max retries exceeded with url: /THUDM/chatglm-6b/resolve/main/tokenizer_config.json (Caused by SSLError(SSLCertVerificationError(1, '[SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed: self signed certificate in certificate chain (_ssl.c:1007)'))). Neither adding os.environ['CURL_CA_BUNDLE'] = '' nor calling requests.get('https://huggingface.co/THUDM/chatglm-6b/resolve/main/tokenizer_config.json', verify=False) helped. Any help would be appreciated.
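A self-signed certificate in the chain usually means a corporate proxy is intercepting TLS. Rather than disabling verification, a hedged workaround is to point Python's HTTPS stack at a CA bundle that actually contains that certificate. The bundle path below is an assumption; substitute the one exported from your proxy:

```python
import os

# Assumed path to a CA bundle that includes the self-signed certificate in
# your chain; adjust to your environment.
CA_BUNDLE = "/etc/ssl/certs/ca-certificates.crt"

# requests honours REQUESTS_CA_BUNDLE; Python's ssl module honours
# SSL_CERT_FILE. Setting both covers downloads made by transformers.
os.environ["REQUESTS_CA_BUNDLE"] = CA_BUNDLE
os.environ["SSL_CERT_FILE"] = CA_BUNDLE
```

Set these before importing transformers so the first download attempt already uses the right bundle.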
Can you write one more for p-tuning? I don't know Python. @AdamBear
Running python web_demo.py fails; the log shows "Explicitly passing a" and then, twice:
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
Okay.
OSError: We couldn't connect to 'https://huggingface.co' to load this file, couldn't find it in the cached files and it looks like THUDM/chatglm-6b is not the path to a directory containing a file named config.json. Checkout your internet connection or see how to run the library in offline mode at 'https://huggingface.co/docs/transformers/installation#offline-mode'.
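One way around this connectivity error is to keep a local clone of the model and only fall back to the hub id when it is missing. `resolve_model_path` is a hypothetical helper sketched here, not part of the repo; it checks for exactly the config.json file the OSError complains about:

```python
import os

def resolve_model_path(local_dir: str, hub_id: str = "THUDM/chatglm-6b") -> str:
    """Prefer a local model directory if it contains config.json;
    otherwise return the hub id so transformers tries to download it."""
    if os.path.isdir(local_dir) and os.path.isfile(os.path.join(local_dir, "config.json")):
        return local_dir
    return hub_id
```

The demo could then call, say, AutoModel.from_pretrained(resolve_model_path("./chatglm-6b"), trust_remote_code=True) and would load from disk whenever a local copy exists, never touching the network.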
How can we run streamlit run web_demo2.py on another local address?
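Streamlit exposes the bind address alongside the port as server options. A sketch, with example values (adjust the address and port to your setup): either pass the flags on the command line, or persist them in .streamlit/config.toml so every run picks them up:

```shell
# On the command line (address/port values are examples):
#   streamlit run web_demo2.py --server.address 0.0.0.0 --server.port 6006
# Or persist the same settings in the project's Streamlit config file:
mkdir -p .streamlit
cat > .streamlit/config.toml <<'EOF'
[server]
address = "0.0.0.0"
port = 6006
EOF
cat .streamlit/config.toml
```

Binding to 0.0.0.0 makes the demo reachable from other machines; use 127.0.0.1 to keep it local-only.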