Hello OpenClaw Security Team,
My recently published skill azure-bing-grounding has been automatically flagged as suspicious on ClawHub. I believe this is a false positive.
Why it was likely flagged:
The underlying Python script (scripts/bing_grounding.py) manually reads the ~/.openclaw/.env file to load Azure credentials (AZURE_CLIENT_ID, AZURE_CLIENT_SECRET, AZURE_TENANT_ID) and then uses them with the official azure-identity and azure-ai-agents Python SDKs.
Why it is safe:
- The script only communicates with the user-provided FOUNDRY_PROJECT_ENDPOINT (an official Azure domain).
- It does not exfiltrate any environment variables or credentials to third-party or arbitrary endpoints.
- The credential-loading logic is used strictly to instantiate Azure's ClientSecretCredential.
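For reference, the pattern in question looks roughly like the sketch below (this is an illustrative simplification, not the actual skill code): the .env file is parsed with the standard library only, and the resulting values are passed directly to azure-identity's ClientSecretCredential, with no other network use.

```python
# Illustrative sketch of the credential-loading pattern (not the actual
# skill source): parse ~/.openclaw/.env locally, then hand the values to
# azure-identity's ClientSecretCredential.
from pathlib import Path


def load_env(path: str) -> dict:
    """Parse simple KEY=VALUE lines, ignoring blanks and # comments."""
    env = {}
    for line in Path(path).expanduser().read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip().strip('"').strip("'")
    return env


# creds = load_env("~/.openclaw/.env")
# from azure.identity import ClientSecretCredential
# credential = ClientSecretCredential(
#     tenant_id=creds["AZURE_TENANT_ID"],
#     client_id=creds["AZURE_CLIENT_ID"],
#     client_secret=creds["AZURE_CLIENT_SECRET"],
# )
# The credential object is only ever used to authenticate calls to the
# user-provided FOUNDRY_PROJECT_ENDPOINT via the azure-ai-agents SDK.
```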
Could you please review the source code on ClawHub (or the linked GitHub repo: BoWang306/azure-bing-grounding-skill) and remove the security flag?
Thank you!