The [VertexAIChatCompletion](https://github.com/microsoft/semantic-kernel/blob/main/python/semantic_kernel/connectors/ai/google/vertex_ai/services/vertex_ai_chat_completion.py) class does not support an [async_client](https://github.com/microsoft/semantic-kernel/blob/b1ecee2d0e513963a3aef99bbb85f7d7bce816aa/python/semantic_kernel/connectors/ai/open_ai/services/azure_chat_completion.py#L55) parameter the way the Azure connector does. Is support planned? In the meantime, is there any alternative way to intercept the HTTP requests/responses sent to the LLM when using [VertexAIChatCompletion](https://github.com/microsoft/semantic-kernel/blob/main/python/semantic_kernel/connectors/ai/google/vertex_ai/services/vertex_ai_chat_completion.py)?
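For context, this is the kind of interception the `async_client` parameter enables on the Azure side — a rough, non-runnable sketch (endpoint, key, and deployment values are placeholders) that passes an `httpx.AsyncClient` with event hooks into `AsyncAzureOpenAI` and hands that to `AzureChatCompletion`:

```python
# Sketch only: illustrates the async_client pattern available for Azure,
# which has no equivalent on VertexAIChatCompletion today.
import httpx
from openai import AsyncAzureOpenAI
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

async def log_request(request: httpx.Request) -> None:
    # Inspect/log the outgoing HTTP request to the LLM.
    print(f"--> {request.method} {request.url}")

async def log_response(response: httpx.Response) -> None:
    # Inspect/log the HTTP response from the LLM.
    print(f"<-- {response.status_code}")

http_client = httpx.AsyncClient(
    event_hooks={"request": [log_request], "response": [log_response]},
)

azure_client = AsyncAzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com",  # placeholder
    api_key="<your-api-key>",                                   # placeholder
    api_version="2024-02-01",                                   # placeholder
    http_client=http_client,
)

service = AzureChatCompletion(async_client=azure_client)
```

A comparable hook point for the Vertex AI connector would presumably need to be exposed through the underlying Google SDK client, since there is currently no `async_client`-style parameter to inject one.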