docs: Adds separate indexable FAQs in docs #7139
Conversation
AI Code Review: No specific line comments generated.
Code Review Summary

✅ Strengths
This PR refactors the troubleshooting documentation by moving FAQ content to a dedicated faqs.mdx file and adding comprehensive redirect mappings. The changes improve documentation organization but have several issues with formatting, broken links, and redirect configuration that need to be addressed.
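For context on what "redirect mappings" look like in this setup: Mintlify reads redirects from the site's docs.json. The sketch below is illustrative only; the source paths are assumptions, not the PR's actual mappings.

```json
{
  "redirects": [
    { "source": "/troubleshooting/faqs", "destination": "/faqs" },
    { "source": "/reference/faq", "destination": "/faqs" }
  ]
}
```

Each entry maps an old path to its new home, so existing links and search-engine results keep resolving after content moves.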
* docs: add comprehensive Ollama troubleshooting guide
  - Add solutions for local connection issues
  - Document remote Ollama configuration steps
  - Include WSL-specific networking fixes
  - Add Docker container connectivity solutions
  - Cover parse error troubleshooting

  Addresses common Ollama integration problems users face when setting up Continue with local, remote, WSL, and containerized Ollama instances.
* docs: add local assistant secrets management section to FAQs
  - Added comprehensive documentation for managing local secrets and environment variables
  - Included multiple methods for configuring .env files (workspace, global, process)
  - Added examples showing how to use secrets in config.yaml
  - Explained difference between local secrets and Hub-managed secrets
  - Added troubleshooting tips for common secret configuration issues
  - Added link to Ollama guide for better discoverability
  - Added link to offline usage guide for users without internet access
* docs: add model addons usage documentation for local assistants
  - Document how to use hub model addons in local assistant configs
  - Show examples of using 'uses:' syntax with provider/model-name format
  - Explain how to combine hub addons with local model configurations
  - Include override example for customizing addon settings locally
  - Add requirements section noting login and internet connection needed
* fix: resolve broken image links in documentation
  - Fixed broken link to plan mode selector image (use plan-mode-selector.png)
  - Fixed broken link to assistant extension selector image
  - Removed broken link to non-existent /features/plan/how-to-customize page
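As a rough illustration of the secrets, model-addon, and remote-Ollama usage the commits above document, a local assistant config might look like this. Field names, the `uses:` addon reference, and the `${{ secrets.* }}` syntax are assumptions based on the commit messages, not verified against Continue's config schema.

```yaml
# Hypothetical local assistant config.yaml sketch
models:
  # Hub model addon referenced with the 'uses:' provider/model-name format
  - uses: openai/gpt-4o

  # Local model definition with an API key pulled from a .env secret
  - name: My OpenAI Model
    provider: openai
    model: gpt-4o
    apiKey: ${{ secrets.OPENAI_API_KEY }}

  # Remote Ollama instance (hostname is a placeholder)
  - name: Remote Llama
    provider: ollama
    model: llama3.1
    apiBase: http://my-ollama-host:11434
```

The corresponding secret would live in a workspace, global, or process-level .env file (e.g. `OPENAI_API_KEY=...`), which is the distinction from Hub-managed secrets that the FAQ section explains.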
🎉 This PR is included in version 1.6.0 🎉 The release is available on: Your semantic-release bot 📦🚀 |
🎉 This PR is included in version 1.9.0 🎉 The release is available on: Your semantic-release bot 📦🚀 |
Description
While looking for an answer, I could not easily find it in the troubleshooting page. I also know that /faqs pages perform well in LLM search, so I moved the FAQs to their own section for discoverability and better linking.
https://continue-docs-bdougie-faqs.mintlify.app/faqs
AI Code Review
@continue-general-reviewor @continue-detailed-review
Checklist
Screen recording or screenshot
Tests
https://continue-docs-bdougie-faqs.mintlify.app/faqs
Summary by cubic
Moved FAQs to a separate, indexable section in the docs to make answers easier to find and link to.