Writing Code Documentation

Explore top LinkedIn content from expert professionals.

  • View profile for Vitaly Friedman

    Practical insights for better UX • Running “Measure UX” and “Design Patterns For AI” • Founder of SmashingMag • Speaker • Loves writing, checklists and running workshops on UX. 🍣

    222,776 followers

    🌎 Designing Cross-Cultural And Multi-Lingual UX. Guidelines on how to stress test our designs, how to define a localization strategy, and how to deal with currencies, dates, word order, pluralization, colors and gender pronouns.

    ⦿ Translation: “We adapt our message to resonate in other markets.”
    ⦿ Localization: “We adapt the user experience to local expectations.”
    ⦿ Internationalization: “We adapt our codebase to work in other markets.”

    ✅ English-language users make up about 26% of users.
    ✅ Top written languages: Chinese, Spanish, Arabic, Portuguese.
    ✅ Most users prefer content in their native language(s).
    ✅ French texts are on average 20% longer than English ones.
    ✅ Japanese texts are on average 30–60% shorter.
    🚫 Flags aren’t languages: avoid them for language selection.
    🚫 Language direction ≠ design direction (“F” vs. Zig-Zag pattern).
    🚫 Not everybody has first/middle names: “Full name” is better.
    ✅ Always reserve at least 30% extra room for longer translations.
    ✅ Stress test your UI for translation with pseudolocalization.
    ✅ Plan for line wrap, truncation, and very short and very long labels.
    ✅ Adjust numbers, dates, times, formats, units, addresses.
    ✅ Adjust currency, spelling, input masks, placeholders.
    ✅ Always conduct UX research with local users.

    When localizing an interface, we need to work beyond translation. We need to be respectful of cultural differences. E.g. in Arabic we often need to increase the spacing between lines. For the Chinese market, we need to increase the density of information. German sites require a vast amount of detail to communicate that a topic is well thought out. Stress test your design. Avoid assumptions. Work with local content designers. Spend time in the country to better understand the market. Have local help on the ground. And test repeatedly with local users as an ongoing part of the design process.
    You’ll be surprised by some findings, but you’ll also learn to adapt and scale to be effective — whatever market is going to come up next.

    Useful resources:
    ⦿ UX Design Across Different Cultures, by Jenny Shen https://lnkd.in/eNiyVqiH
    ⦿ UX Localization Handbook, by Phrase https://lnkd.in/eKN7usSA
    ⦿ A Complete Guide To UX Localization, by Michal Kessel Shitrit 🎗️ https://lnkd.in/eaQJt-bU
    ⦿ Designing Multi-Lingual UX, by yours truly https://lnkd.in/eR3GnwXQ
    ⦿ Flags Are Not Languages, by James Offer https://lnkd.in/eaySNFGa
    ⦿ IBM Globalization Checklists https://lnkd.in/ewNzysqv

    Books:
    ⦿ Cross-Cultural Design (https://lnkd.in/e8KswErf) by Senongo Akpem
    ⦿ The Culture Map (https://lnkd.in/edfyMqhN) by Erin Meyer
    ⦿ UX Writing & Microcopy (https://lnkd.in/e_ZFu374) by Kinneret Yifrah
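The pseudolocalization stress test recommended above can be sketched in a few lines. This is a minimal illustration, not a production tool (real pseudolocalization libraries exist); the accent map and the default 30% expansion factor are assumptions chosen for demonstration.

```python
# Minimal pseudolocalization sketch: swap ASCII letters for accented
# look-alikes and pad the string to simulate longer translations.
ACCENTED = str.maketrans(
    "aeiouAEIOUcn",
    "àéîöûÀÉÎÖÛçñ",
)

def pseudolocalize(text: str, expansion: float = 0.3) -> str:
    """Return a pseudo-translated string roughly `expansion` longer than `text`."""
    swapped = text.translate(ACCENTED)
    # Pad to simulate e.g. French/German expansion; brackets make
    # truncated strings easy to spot in the rendered UI.
    pad = "~" * max(1, round(len(text) * expansion))
    return f"[{swapped}{pad}]"

print(pseudolocalize("Save changes"))  # [Sàvé çhàñgés~~~~]
```

Feeding every UI label through a function like this quickly reveals layouts that break on longer or accented text, without waiting for real translations.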

  • View profile for Syeda Sumiha Jahan

    ISTQB® Certified (CTFL v4.0) | Software QA Engineer | Manual & Automation Testing | API Testing | Performance Testing | Database Testing | Web & Mobile App Testing

    9,814 followers

    📚 Key Test Documentation Types

    1. Test Plan
    Purpose: outlines the overall strategy and scope of testing.
    Includes: objectives, scope (in-scope and out-of-scope), resources (testers, tools), test environment, deliverables, risk and mitigation plan.
    Example: "Regression testing will be performed on modules A and B using manual test cases."

    2. Test Strategy
    Purpose: high-level document describing the overall test approach.
    Includes: testing types (manual, automation, performance), tools and technologies, entry/exit criteria, defect management process.

    3. Test Scenario
    Purpose: describes a high-level idea of what to test.
    Example: "Verify that a registered user can log in successfully."

    4. Test Case
    Purpose: detailed instructions for executing a test.
    Includes: test case ID, description, preconditions, test steps, expected results, actual results, status (Pass/Fail).

    5. Requirement Traceability Matrix (RTM)
    Purpose: ensures every requirement is covered by test cases.
    Format: Requirement ID | Requirement Description | Test Case IDs
    Example: REQ_001 | Login functionality | TC_001, TC_002

    6. Test Data
    Purpose: input data used for executing test cases.
    Example: Username: testuser, Password: Password123

    7. Test Summary Report
    Purpose: summary of all testing activities and outcomes.
    Includes: total test cases executed, passed/failed count, defects raised/resolved, testing coverage, final recommendation (Go/No-Go).

    8. Defect/Bug Report
    Purpose: details of defects found during testing.
    Includes: bug ID, summary, severity/priority, steps to reproduce, status (Open, In Progress, Closed), screenshots (optional).

    Here's a set of downloadable, editable templates for essential software testing documentation. These are useful for manual QA, automation testers, or team leads preparing structured reports.

    📄 1. Test Plan Template
    File type: Excel / Word
    Key sections: project overview, test objectives, scope (in/out), resources & roles, test environment, schedule & milestones, risks & mitigation, entry/exit criteria.
    🔗 Download Test Plan Template (Google Docs)

    📄 2. Test Case Template
    File type: Excel
    Columns included: Test Case ID, Module Name, Description, Preconditions, Test Steps, Expected Result, Actual Result, Status (Pass/Fail), Comments.
    🔗 Download Test Case Template (Google Sheets)

    📄 3. Requirement Traceability Matrix (RTM)
    File type: Excel
    Key fields: Requirement ID, Requirement Description, Test Case ID, Status (Covered/Not Covered).
    🔗 Download RTM Template (Google Sheets)

    📄 4. Bug Report Template
    File type: Excel
    Columns: Bug ID, Summary, Severity, Priority, Steps to Reproduce, Actual vs. Expected Result, Status, Reported By.
    🔗 Download Bug Report Template (Google Sheets)

    📄 5. Test Summary Report
    File type: Word or Excel
    Includes: project name, total test cases, execution status (Pass/Fail), bug summary, test coverage, final remarks / sign-off.
    🔗 Download Test Summary Template (Google Docs)

    #QA
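The RTM described above is easy to make executable rather than keeping it only in a spreadsheet. A small sketch (the requirement and test case IDs are invented for illustration) that flags requirements no test case covers:

```python
# Sketch of a Requirement Traceability Matrix (RTM) as plain data:
# each requirement maps to the test cases that cover it.
rtm = {
    "REQ_001": {"description": "Login functionality", "test_cases": ["TC_001", "TC_002"]},
    "REQ_002": {"description": "Password reset", "test_cases": []},
}

def uncovered(matrix: dict) -> list[str]:
    """Return the IDs of requirements that no test case covers."""
    return [req_id for req_id, row in matrix.items() if not row["test_cases"]]

print(uncovered(rtm))  # ['REQ_002']
```

A check like this can run in CI so coverage gaps surface automatically instead of during a manual audit.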

  • View profile for Leigh-Anne Wells

    Founder, Firecrab | Technical Content Strategist for AI-Savvy Brands | Human-First Writing in an AI-Saturated World

    2,164 followers

    Tech writers don’t write.
    → Not in the way most people think.

    We don’t sit down with a blank page and “make it up.” We’re not wordsmiths polishing clever sentences. We’re not decorators. We’re architects. And in the age of AI, our role has quietly evolved into something far more powerful—and far more essential.

    Here’s what the new tech writer actually does:

    1. We curate. We filter the noise. From dev notes, internal wikis, messy Notion pages, AI-generated drafts—we gather what matters and discard what doesn’t.
    2. We verify. We don’t just copy and paste. We check, clarify, recheck. Because what’s written in the spec doc isn’t always what’s true in production.
    3. We restructure. We’re not just editing for grammar. We’re rearchitecting information to match how real users actually read and retain it. Good docs don’t just inform. They guide.
    4. We translate. We bridge the gap between engineering and end user. Between product complexity and business clarity. Between AI output and human understanding.
    5. We strategize. We don’t “just write the docs.” We shape documentation ecosystems—mapping user journeys, designing content models, identifying gaps before they become support tickets.

    If you’re hiring a writer to “clean up” your AI-generated documentation, you’re looking for the wrong skillset. You don’t need a cleaner. You need an operator. One who understands:
    • How your product works
    • What your users need
    • What your GTM team is saying
    • What your AI tools are missing
    • And how to bring it all together—seamlessly

    Because in 2025, tech writers aren’t just writers. We’re content strategists with dev-level instincts. And the companies that understand this? They’re the ones whose products get adopted faster, retained longer, and supported less.

  • View profile for Tousif Hujare

    Lead Business Analyst @Birlasoft

    5,587 followers

    𝟭. 𝗕𝘂𝘀𝗶𝗻𝗲𝘀𝘀 𝗥𝗲𝗾𝘂𝗶𝗿𝗲𝗺𝗲𝗻𝘁𝘀 𝗗𝗼𝗰𝘂𝗺𝗲𝗻𝘁 (𝗕𝗥𝗗)
    A BRD captures high-level business needs and objectives from a stakeholder’s perspective. It focuses on why a project is being undertaken and what value it brings to the business.
    𝗞𝗲𝘆 𝗘𝗹𝗲𝗺𝗲𝗻𝘁𝘀:
    • Business objectives
    • Stakeholder needs
    • High-level business requirements
    • Scope of the project
    • Business rules
    • Assumptions and constraints

    𝟮. 𝗙𝘂𝗻𝗰𝘁𝗶𝗼𝗻𝗮𝗹 𝗥𝗲𝗾𝘂𝗶𝗿𝗲𝗺𝗲𝗻𝘁𝘀 𝗗𝗼𝗰𝘂𝗺𝗲𝗻𝘁 (𝗙𝗥𝗗)
    An FRD translates high-level business needs into detailed functional requirements that describe how a system should behave. It focuses on system interactions, workflows, and features that will fulfill business requirements.
    𝗞𝗲𝘆 𝗘𝗹𝗲𝗺𝗲𝗻𝘁𝘀:
    • Functional requirements (detailed descriptions of features)
    • System workflows
    • Use cases and user stories
    • UI/UX requirements (screens, wireframes)
    • Data flow diagrams

    𝟯. 𝗦𝗼𝗳𝘁𝘄𝗮𝗿𝗲 𝗥𝗲𝗾𝘂𝗶𝗿𝗲𝗺𝗲𝗻𝘁𝘀 𝗦𝗽𝗲𝗰𝗶𝗳𝗶𝗰𝗮𝘁𝗶𝗼𝗻 (𝗦𝗥𝗦)
    An SRS is a comprehensive document that includes both functional and non-functional requirements, providing a complete specification of how the software should work. It is often used by developers and testers for system implementation.
    𝗞𝗲𝘆 𝗘𝗹𝗲𝗺𝗲𝗻𝘁𝘀:
    • Functional requirements (features & capabilities)
    • Non-functional requirements (performance, security, scalability)
    • System architecture & design constraints
    • Data models
    • Interfaces (API, external system interactions)

    While the 𝗕𝗥𝗗, 𝗙𝗥𝗗, and 𝗦𝗥𝗦 serve different purposes, they all contribute to 𝗰𝗹𝗲𝗮𝗿 𝗮𝗻𝗱 𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲𝗱 𝗿𝗲𝗾𝘂𝗶𝗿𝗲𝗺𝗲𝗻𝘁𝘀. In 𝗔𝗴𝗶𝗹𝗲 𝗲𝗻𝘃𝗶𝗿𝗼𝗻𝗺𝗲𝗻𝘁𝘀, these documents may be replaced with 𝗣𝗿𝗼𝗱𝘂𝗰𝘁 𝗕𝗮𝗰𝗸𝗹𝗼𝗴𝘀, 𝗨𝘀𝗲𝗿 𝗦𝘁𝗼𝗿𝗶𝗲𝘀, 𝗮𝗻𝗱 𝗘𝗽𝗶𝗰𝘀, but in 𝗪𝗮𝘁𝗲𝗿𝗳𝗮𝗹𝗹 𝗼𝗿 𝗵𝘆𝗯𝗿𝗶𝗱 𝗺𝗼𝗱𝗲𝗹𝘀, they are still widely used.

    Which of these documents do you use in your projects? Let’s discuss in the comments! 👇

    #BusinessAnalysis #IIBA #BRD #FRD #SRS #RequirementsEngineering #SoftwareDevelopment

  • View profile for Rocky Bhatia

    400K+ Engineers | Architect @ Adobe | GenAI & Systems at Scale

    208,720 followers

    Demystifying CI/CD Pipelines: A Simple Guide

    1. Code Changes: Developers make changes to the codebase to introduce new features, bug fixes, or improvements.
    2. Code Repository: The modified code is pushed to a version control system (e.g., Git). This triggers the CI/CD pipeline.
    3. Build: The CI server pulls the latest code from the repository and initiates the build process. Compilation, dependency resolution, and other build tasks produce executable artifacts.
    4. Pre-Deployment Testing: Automated tests (unit tests, integration tests, etc.) are executed to ensure that the changes haven't introduced errors. This phase also includes static code analysis to check for coding standards and potential issues.
    5. Staging Environment: If the pre-deployment tests pass, the artifacts are deployed to a staging environment that closely resembles production.
    6. Staging Tests: Additional tests, specific to the staging environment, validate the application's behavior in an environment that mirrors production.
    7. Approval/Gate: In some cases, a manual approval step or a set of gates may be included, requiring human intervention or specific criteria to be met before proceeding to the next stage.
    8. Deployment to Production: If all tests pass and any necessary approvals are obtained, the artifacts are deployed to the production environment.
    9. Post-Deployment Testing: After deployment to production, additional tests may be performed to ensure the application's stability and performance in the live environment.
    10. Monitoring: Continuous monitoring tools track the application's performance, detect potential issues, and gather insights into user behaviour.
    11. Rollback (If Necessary): If issues are detected post-deployment, the CI/CD pipeline may support an automatic or manual rollback to a previous version.
    12. Notification: The CI/CD pipeline notifies relevant stakeholders about the success or failure of the deployment, providing transparency and accountability.

    This iterative and automated process ensures that changes to the codebase can be quickly and reliably delivered to production, promoting a more efficient and consistent software delivery lifecycle. It also helps catch potential issues early in the development process, reducing the risk of deploying changes to production.
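The twelve steps above amount to a gated sequence: each stage runs only if the previous one succeeded, and a failure after go-live triggers a rollback. A toy sketch under those assumptions (the stage names and pass/fail lambdas are invented; a real pipeline would call build tools and test runners here):

```python
# Toy CI/CD pipeline: run gated stages in order; roll back if any
# stage fails after production deployment. Each "stage" is a callable
# returning True (pass) or False (fail).
def run_pipeline(stages, on_rollback):
    deployed = False
    for name, stage in stages:
        if not stage():
            if deployed:  # failure after go-live: trigger rollback
                on_rollback()
            return f"failed at: {name}"
        if name == "deploy_to_production":
            deployed = True
    return "success"

log = []
stages = [
    ("build", lambda: True),
    ("predeployment_tests", lambda: True),
    ("staging_tests", lambda: True),
    ("deploy_to_production", lambda: True),
    ("postdeployment_tests", lambda: False),  # simulate a live failure
]
result = run_pipeline(stages, on_rollback=lambda: log.append("rolled back"))
print(result, log)  # failed at: postdeployment_tests ['rolled back']
```

The point of the sketch is the gating logic: no stage runs unless every earlier gate passed, which is exactly what makes CI/CD catch issues before they reach users.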

  • View profile for Brij kishore Pandey

    AI Architect | AI Engineer | Generative AI | Agentic AI

    709,423 followers

    As APIs form the backbone of modern software architecture, I wanted to share this comprehensive REST API cheatsheet covering crucial implementation aspects:

    1. Core Architectural Principles:
    - Client-server separation ensures scalability and independent evolution
    - Statelessness eliminates server-side session storage
    - Cacheability improves performance and reduces server load
    - Layered system architecture enables middleware and security layers
    - Code on demand provides flexibility for client-side execution
    - Uniform interface standardizes client-server communication

    2. HTTP Methods Demystified:
    - GET: retrieve data (read)
    - POST: create new resources
    - PUT: complete resource update
    - PATCH: partial resource modification
    - DELETE: remove resources
    - HEAD: fetch headers only
    - OPTIONS: check available operations

    3. Status Code Categories:
    - 2xx: success (200 OK, 201 Created)
    - 3xx: redirection (301 Moved Permanently)
    - 4xx: client errors (401 Unauthorized, 404 Not Found)
    - 5xx: server errors (500 Internal Server Error)

    4. Security Implementation:
    - OAuth 2.0 / JWT for robust authentication
    - Role-based access control (RBAC) for authorization
    - TLS/SSL encryption
    - Input validation
    - Rate limiting
    - CORS configuration
    - Security headers (CSP, X-Frame-Options)

    5. Resource Naming Best Practices:
    - Noun-based endpoints (/users, /products)
    - Plural resources for collections
    - Hyphenated compound words
    - Lowercase for consistency

    6. Production-Ready Features:
    - API versioning in URLs
    - Query parameter filtering
    - Resource sorting capabilities
    - Pagination for large datasets
    - Comprehensive error handling
    - OpenAPI documentation
    - Efficient caching strategies

    What other critical aspects do you consider when designing REST APIs?
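Several of the practices above (plural noun endpoints, pagination, meaningful status codes) can be shown together in one small sketch. This is an illustrative stub rather than a framework handler; the `/v1/users` resource, the page sizes, and the fake data are all invented for the example:

```python
# Sketch of the server-side logic behind GET /v1/users?page=2&per_page=2,
# returning (status_code, body) pairs in the spirit of the cheatsheet.
USERS = [{"id": i, "name": f"user{i}"} for i in range(1, 6)]  # fake data

def list_users(page: int = 1, per_page: int = 2):
    if page < 1 or per_page < 1:
        # 4xx: the client sent invalid parameters
        return 400, {"error": "page and per_page must be positive"}
    start = (page - 1) * per_page
    items = USERS[start:start + per_page]
    if not items:
        return 404, {"error": "page out of range"}
    # 200 OK with pagination metadata alongside the data
    return 200, {"data": items, "page": page, "per_page": per_page, "total": len(USERS)}

status, body = list_users(page=2, per_page=2)
print(status, [u["id"] for u in body["data"]])  # 200 [3, 4]
```

Returning the pagination metadata (`page`, `per_page`, `total`) with each response is what lets clients iterate large collections without guessing when to stop.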

  • View profile for Sandip Das

    Senior Cloud, DevOps & MLOps Engineer | AWS, Kubernetes (EKS), Terraform, CI/CD | AI Application Developer & Platform Modernization Engineer | AWS Container Hero

    114,326 followers

    It took me some extra late-night hours, but here you go. I have simplified an ideal GitHub Actions flow for you. 👇

    1) 🧭 Triggers
    🧲 GitHub event fires → can be a push, PR, manual dispatch, or a scheduled trigger.
    📜 Workflow file executes → GitHub reads the YAML config and starts the pipeline.
    🔁 Workflow trigger hits the CI phase → we now jump into the first main section: CI.

    2) 🔧 CI Phase
    📋 Lint & validate → checks formatting and file syntax — YAML, Dockerfiles, Terraform, etc.
    🏗️ Build artifacts → your app gets compiled or packaged (Docker images, binaries, etc.).
    🧬 Unit tests → quick tests that verify individual components or logic.
    🧪 Integration tests → validate that your services/modules interact correctly.
    📊 Code coverage → checks how much of your code is covered by tests — helps improve test quality.
    🔒 Security scanning → tools like CodeQL or Trivy catch vulnerabilities early.

    3) 🧮 Matrix + CI Result Evaluation
    🧮 Matrix execution → parallel jobs (across OS versions, Python/Node versions, etc.).
    ✅ CI results → only proceed if everything passes — block if even one test fails.

    4) 🚀 CD Phase (Continuous Deployment)
    🚀 CD phase starts → if CI is clean, we move toward releasing.
    🧪 Deploy to staging → ship to a safe sandbox environment that mirrors production.
    🔥 Smoke tests in staging → high-level sanity checks (e.g., “Does the login page load?”).
    🛑 Approval required → human checkpoint — usually a senior engineer or release manager.
    ✅ Approval granted → deploy to production → this is your official go-live moment.
    🔍 Post-deployment tests → sanity and health checks to ensure production is stable.

    5) ♻️ Ops, Rollbacks, and Notifications
    🔁 Rollback plan (if needed) → if post-deploy tests fail, we roll back to the last good version.
    📣 Notify engineers → the DevOps team gets pinged (Slack, Teams, PagerDuty, etc.).
    📡 Monitoring & logging → live dashboards, alerts, and logs keep watch over the system.

    6) ✅ Final Status Updates
    🟢 Update status badge → those fancy CI badges on your README get updated.
    📌 GitHub repository status reflects the build/deploy result → shows up directly on your pull request for reviewers.

    Get started with GitHub Actions the hands-on way: https://lnkd.in/gcReECUU

    Consider ♻️ reposting if you found this useful.

    Cheers,
    Sandip Das
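Matrix execution in step 3 is just a cross product of configuration axes: every combination becomes one parallel job. A sketch of that expansion (the axis names and values mirror common GitHub Actions matrices but are invented for this example):

```python
from itertools import product

# Expand a GitHub-Actions-style matrix into one job per combination.
matrix = {
    "os": ["ubuntu-latest", "macos-latest"],
    "python-version": ["3.11", "3.12"],
}

def expand(matrix: dict) -> list[dict]:
    """Return one config dict per combination of matrix values."""
    keys = list(matrix)
    return [dict(zip(keys, combo)) for combo in product(*matrix.values())]

jobs = expand(matrix)
print(len(jobs))  # 4 parallel jobs: 2 OSes x 2 Python versions
print(jobs[0])    # {'os': 'ubuntu-latest', 'python-version': '3.11'}
```

This is why matrix builds get expensive fast: adding one value to any axis multiplies the total job count.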

  • View profile for Dr Bart Jaworski

    Become a great Product Manager with me: Product expert, content creator, author, mentor, and instructor

    135,271 followers

    Most companies don't have an API problem. They have an API discovery problem. How to address it?

    Your APIs already run on AWS, Azure, or other gateways. They work fine. The real challenge? Nobody can find them, understand them, or adopt them easily. Every API integration requires multiple calls and months of dev work.

    Here's what typically happens:
    • APIs scattered across Postman, GitHub, and multiple gateways
    • Documentation is outdated or buried in Confluence
    • Internal teams asking, "Wait, do we have an API for that?"
    • Potential partners unable to onboard themselves
    • Compliance and governance nightmares

    Sound familiar? This is where a proper developer portal changes everything. Not another gateway. Not more infrastructure. Just one unified portal where all your APIs live, documented and ready to use.

    This is exactly what Digitalapi.ai, partner of this post, does:

    1) Auto-discovery across your entire stack: connect your AWS gateways, Postman workspaces, and GitHub repos. AI automatically finds, catalogs, and documents every API. No manual work needed.
    2) AI-powered documentation that never gets stale: every endpoint update is instantly reflected in your docs. Internal teams and external partners always see the current state, eliminating the number-one reason integrations fail.
    3) Built-in governance and compliance: automatic checks ensure your APIs meet security standards and compliance requirements. No more manual audits or spreadsheet tracking. You know something is wrong the moment an issue is introduced.
    4) Branded portal for third-party adoption: open your APIs to external developers through a professional, branded portal. They can discover, test, and integrate, all self-service. That means far fewer calls!
    5) Monetization built in: turn API access into revenue with subscription tiers, usage-based pricing, and automated billing. Your APIs become a business channel, not just a technical feature. Just like it always should have been.

    The result?
    • Internal teams find and use existing APIs instead of rebuilding them
    • Partners onboard themselves without bothering your engineering team
    • New revenue streams from API subscriptions
    • Faster integrations = faster partnerships = faster growth

    Your API already exists. Make it discoverable, governable, and monetizable. Check out http://www.DigitalAPI.ai and see how a proper dev portal transforms scattered APIs into a growth engine.

    Did you ever struggle with an API integration? Let me know in the comments :)

    #productmanagement #api #apistrategy

  • View profile for Charu Mitra Dubey

    Marketing @ Stello AI | Product + Content Marketing | B2B SaaS Writer & Consultant | Words in Entrepreneur, Sprout Social, Buffer | National Level Awardee “Marketing” | Founder @ CopyStash @TIP 💜

    45,154 followers

    I’ve written over 700 blogs — most of them for B2B SaaS companies. And if there’s one thing I’ve learned after years of doing this, it’s this:

    👉 Most blog posts don’t fail because the writing is bad.
    👉 They fail because the thinking behind them is shallow.

    Writers jump straight into the doc. They focus on keywords instead of intent. They publish something that looks “good” but is easily replaceable and forgotten in a week.

    The truth is, good content answers a question. But great content solves a problem completely. And that shift happens before you write a single word.

    Here’s the 3-step framework I use before I start writing — one that’s helped my content consistently rank, convert, and actually matter 👇

    1. Understand search intent and validate it with SERP analysis
    The keyword is just the entry point. What matters is the real problem behind it. If someone searches for “email automation tools,” they’re not just collecting tool names. They might be:
    - Comparing features before they buy
    - Looking for beginner-friendly options
    - Trying to automate a specific workflow
    - Checking pricing and ROI

    This is why SERP analysis is crucial. Before I write, I study the top 5-10 results to understand:
    👉 What content format is ranking (listicles, tutorials, comparisons)
    👉 What angle competitors are using (pricing, features, industry-specific)
    👉 How deep they go (surface-level vs. in-depth)
    👉 What’s missing (use cases, FAQs, reviews, decision checklists)

    This tells you what Google rewards and what the audience expects — so you can deliver both.

    2. Build a structure that turns your post into a resource
    Most blog posts are just paragraphs stitched together. But the content that ranks and converts is structured intentionally to solve problems.

    Here’s what I include in almost every piece:
    ✅ Comparison tables – help readers make decisions faster
    ✅ FAQs – capture long-tail questions and PAA queries
    ✅ Use cases – make context and applicability clear
    ✅ User reviews/testimonials – add credibility and trust
    ✅ Decision checklists – guide readers toward next steps

    When you do this, your article stops being “content” — it becomes a solution. And solutions are what Google surfaces and readers save.

    3. Add strategic depth — something no AI or competitor can replicate
    Even if you nail intent and structure, your piece will blend in if it doesn’t bring something original. This is where you inject your experience and perspective:
    👉 A unique POV (“We tested 8 tools — here’s what actually mattered”)
    👉 A new angle (“Best automation tools ranked by ROI, not features”)
    👉 A bonus insight (“3 workflows you can automate in 10 minutes”)

    This is the difference between being informative and being unforgettable.

    TL;DR
    ✔️ Understand the real intent — and validate it through SERP analysis.
    ✔️ Design a structure that solves the problem completely.
    ✔️ Add depth that only your perspective can provide.

  • View profile for Rony Rozen

    Senior TPM @ Google | Stop Helping. Start Owning. | Turning Invisible Work into Strategic Impact | AI & Tech Leadership

    13,905 followers

    Speaking Tech and Human: Why Every Team Needs a Communication Chameleon

    Ever been in a meeting where it feels like everyone's speaking a different language? Not in the literal sense, but in that "tech jargon vs. human speak" kind of way. It happens all the time, especially in cross-functional teams. Engineers, with our love of acronyms and complex terminology, can sometimes leave non-technical folks feeling lost in the weeds.

    I recently witnessed this firsthand. Picture a late-night meeting about an upcoming AI launch. The tension is high, the deadline is looming, and suddenly, someone asks a seemingly simple question: "So, what exactly is an IDE?" The engineer on the call launches into a detailed explanation, complete with references to command-line interfaces. It's like trying to explain astrophysics to someone who just learned the alphabet.

    This is where we TPMs (or anyone with a knack for both tech and "human speak") come in. We're the interpreters, the bridge-builders, ensuring everyone's on the same page. In that late-night meeting, I jumped in with a simple explanation: "An IDE is basically the tool where developers write and test their code. It's like a word processor for software." Problem solved! The question-asker got the gist, the engineer learned a valuable lesson about audience-focused communication, and we all got a little closer to hitting that launch button.

    Key takeaways for clearer tech communication:
    - Know your audience: tailor your explanations to the listener's technical understanding.
    - Focus on the "why": explain the impact and benefits, not just the technical details.
    - Keep it simple: avoid jargon and acronyms whenever possible.
    - Use analogies (when appropriate): relate complex concepts to everyday experiences.

    Effective communication isn't about showing off your technical expertise; it's about building a shared understanding and achieving goals together. And in a world where tech is increasingly intertwined with every aspect of our lives, the ability to translate "tech-speak" into "human-speak" is more important than ever.

    Have you ever witnessed a "lost in translation" moment in tech? Share your stories in the comments! 👇

    #TPMlife #TechLeadership #Google #LifeAtGoogle
