Cloud Application Deployment

Explore top LinkedIn content from expert professionals.

  • View profile for Pau Labarta Bajo

    Building and teaching AI that works > Maths Olympian> Father of 1.. sorry 2 kids

    69,362 followers

    Let's 𝗱𝗲𝗽𝗹𝗼𝘆 a REST API to 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 🐳 Step by step ↓

    𝗧𝗵𝗲 𝗽𝗿𝗼𝗯𝗹𝗲𝗺 🤔
    Two weeks ago we built a REST API to serve historical data on taxi rides in NYC. And last week we wrote a professional Dockerfile to package it inside a Docker image. The API works like a charm on our laptop, but here's the thing: until you deploy it to a production environment and make it accessible to
    > your clients 💁🏻♀️
    > your colleagues 👨🏻💼
    > or even the whole world 🌏
    your real-world impact is ZERO. So today, I want to show you how to deploy this API to a Kubernetes cluster.

    𝗪𝗵𝗮𝘁 𝗶𝘀 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀? ☸📦
    Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of your Dockerized apps. Kubernetes is a powerful beast. However, it also has one BIG problem...

    𝗧𝗵𝗲 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 𝗬𝗔𝗠𝗟 𝗛𝗘𝗟𝗟 🔥
    The Kubernetes configuration required to deploy even the simplest service is
    > very verbose,
    > error-prone, and
    > excessively complex,
    which adds too much friction (and frustration!) to your deployment process. So the question is >> 𝗜𝘀 𝗶𝘁 𝗽𝗼𝘀𝘀𝗶𝗯𝗹𝗲 𝘁𝗼 𝗱𝗲𝗽𝗹𝗼𝘆 𝘁𝗼 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀, 𝘄𝗶𝘁𝗵𝗼𝘂𝘁 𝗴𝗼𝗶𝗻𝗴 𝘁𝗵𝗿𝗼𝘂𝗴𝗵 𝗬𝗔𝗠𝗟 𝗵𝗲𝗹𝗹? Yes, it is! ⬇️

    𝗚𝗶𝗺𝗹𝗲𝘁 𝘁𝗼 𝘁𝗵𝗲 𝗿𝗲𝘀𝗰𝘂𝗲 🦸🏻
    Gimlet ↳🔗 https://gimlet.io/ is a tool running inside your Kubernetes cluster that helps you quickly deploy your apps. Let's start with a manual deployment:

    𝗠𝗮𝗻𝘂𝗮𝗹 𝗱𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁 🔧
    These are the steps to create a new deployment with the Gimlet UI:
    > 𝗜𝗺𝗽𝗼𝗿𝘁 your GitHub repository,
    > choose to manually deploy from a 𝗗𝗼𝗰𝗸𝗲𝗿𝗳𝗶𝗹𝗲,
    > pick your 𝗗𝗼𝗰𝗸𝗲𝗿 𝗿𝗲𝗴𝗶𝘀𝘁𝗿𝘆, and
    > set the 𝗽𝗼𝗿𝘁 𝗻𝘂𝗺𝗯𝗲𝗿 your API is listening on.
    BOOM! Your API is now running in Kubernetes. Let's go one step further...

    𝗔𝘂𝘁𝗼𝗺𝗮𝘁𝗶𝗰 𝗱𝗲𝗽𝗹𝗼𝘆𝗺𝗲𝗻𝘁𝘀 𝘄𝗶𝘁𝗵 𝗖𝗜/𝗖𝗗 ⚙️
    The Gimlet GitHub Action helps you automatically deploy your code changes to your Kubernetes cluster.
    For example, you can add a GitHub workflow that
    > 𝗧𝗿𝗶𝗴𝗴𝗲𝗿𝘀 after every push to the main branch,
    > 𝗧𝗲𝘀𝘁𝘀 your code,
    > 𝗕𝘂𝗶𝗹𝗱𝘀 and pushes the Docker image to your Docker registry (in this case, I use mine from GitHub), and
    > 𝗗𝗲𝗽𝗹𝗼𝘆𝘀 it to Kubernetes using the Gimlet action.
    Continuous delivery made simple!

    𝗙𝘂𝗹𝗹 𝘀𝗼𝘂𝗿𝗰𝗲 𝗰𝗼𝗱𝗲 👨💻
    Below you will find a link to the GitHub repository with all the code ⬇️
    ----
    Hi there! It's Pau Labarta Bajo 👋 Every day I share free, hands-on content on production-grade ML, to help you build real-world ML products. 𝗙𝗼𝗹𝗹𝗼𝘄 𝗺𝗲 and 𝗰𝗹𝗶𝗰𝗸 𝗼𝗻 𝘁𝗵𝗲 🔔 so you don't miss what's coming next #machinelearning #docker #kubernetes #mlops #realworldml
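    The workflow described above could be sketched roughly as follows. This is an illustrative outline, not the repository's actual workflow: the image name, test command, and especially the Gimlet deploy step are placeholders (the post doesn't give the exact action name or inputs — check the docs at gimlet.io):

```yaml
# .github/workflows/deploy.yml — illustrative sketch of the described pipeline
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Test
        run: make test   # placeholder for your real test command
      - name: Log in to GitHub's registry
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push Docker image
        uses: docker/build-push-action@v5
        with:
          push: true
          tags: ghcr.io/your-user/taxi-api:${{ github.sha }}   # hypothetical image name
      - name: Deploy with Gimlet
        # Placeholder step — use the Gimlet GitHub Action per gimlet.io docs
        run: echo "gimlet deploy step goes here"
```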

  • View profile for Deepak Agrawal

    Founder & CEO @ Infra360 | DevOps, FinOps & CloudOps Partner for FinTech, SaaS & Enterprises

    14,782 followers

    Last month, our team lost 40+ hours fighting 7 Kubernetes issues that looked like “nothing.” Nobody talks about the actual stuff that slows teams down in production. It's not the outages. It's the invisible crap that doesn’t show up in dashboards or status checks. Here’s exactly what hit us and what finally worked:

    1. “Deployment succeeded”… but nothing worked. Everything is green. Service unreachable. Turns out: readiness probes were wrong for apps with long cold starts.
    ✓ 𝐇𝐨𝐰 𝐰𝐞 𝐟𝐢𝐱𝐞𝐝 𝐢𝐭: Delayed the probe kick-in and aligned the timeout with real boot time. Worked instantly.

    2. CPU throttling was massive. Nobody knew. Why? Because average CPU usage looked fine. But container_cpu_cfs_throttled_seconds_total told a different story.
    ✓ 𝐇𝐨𝐰 𝐰𝐞 𝐟𝐢𝐱𝐞𝐝 𝐢𝐭: Alert on throttle %, not just usage. Our dashboards were lying to us.

    3. One container restarted 98 times. Nobody caught it. CrashLoopBackOff silently chewing up restart credits.
    ✓ 𝐇𝐨𝐰 𝐰𝐞 𝐟𝐢𝐱𝐞𝐝 𝐢𝐭: Killed auto-retries. Set a hard alert on restart count > 3 in 10 mins. If it fails 3x, it’s not a blip. It’s broken.

    4. Services were randomly failing to resolve each other. Classic: flaky DNS under load.
    ✓ 𝐇𝐨𝐰 𝐰𝐞 𝐟𝐢𝐱𝐞𝐝 𝐢𝐭: Switched to CoreDNS, added an autoscaler, bumped memory limits. No drama since.

    5. PVCs stuck in “Terminating” forever. Volumes wouldn’t detach. Finalizers misbehaving.
    ✓ 𝐇𝐨𝐰 𝐰𝐞 𝐟𝐢𝐱𝐞𝐝 𝐢𝐭: Manual patch jobs to nuke stuck finalizers. Now part of our cleanup cron.

    6. Cluster Autoscaler was too efficient. Scaled down mid-job. Pods got killed with zero warning.
    ✓ 𝐇𝐨𝐰 𝐰𝐞 𝐟𝐢𝐱𝐞𝐝 𝐢𝐭: Split node pools by workload type. Critical stuff stays on on-demand. Spot nodes only run retryable junk.

    7. Completed jobs were hogging resources. They finished… but hung around forever.
    ✓ 𝐇𝐨𝐰 𝐰𝐞 𝐟𝐢𝐱𝐞𝐝 𝐢𝐭: TTL controller + dynamic GC rules based on job type.

    The common theme? None of this showed up in our standard monitoring. We had to dig. Ask dumber questions. Read the logs everyone ignores.
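    For fixes 1 and 7, the manifest changes are small. A sketch — the endpoint, port, and timing numbers here are illustrative, not from the post; align them with your app's real boot time:

```yaml
# Fix 1: give slow-starting apps time before the first readiness check
readinessProbe:
  httpGet:
    path: /healthz            # hypothetical health endpoint
    port: 8080
  initialDelaySeconds: 60     # align with measured cold-start time
  periodSeconds: 10
  timeoutSeconds: 5
  failureThreshold: 3
---
# Fix 7: let the TTL controller garbage-collect finished Jobs
apiVersion: batch/v1
kind: Job
metadata:
  name: example-job
spec:
  ttlSecondsAfterFinished: 600   # delete the Job 10 min after it completes
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: worker
          image: busybox
          command: ["sh", "-c", "echo done"]
```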
If you’ve ever lost a week to stuff like this, what tripped you up? ♻️ 𝐑𝐄𝐏𝐎𝐒𝐓 𝐒𝐨 𝐎𝐭𝐡𝐞𝐫𝐬 𝐂𝐚𝐧 𝐋𝐞𝐚𝐫𝐧.

  • View profile for Damien B.

    Senior Cloud Security Engineer • LinkedIn Learning Instructor, Speaker, Content Creator • AWS Community Builder • Mentor & Advocate

    10,149 followers

    What’s going on, y'all! 👋 I’m excited to announce that the documentation supporting the video I released with the Cloud Security Podcast — "How To Setup A DevSecOps Pipeline for Amazon EKS with Terraform" — has been released! 🎊 🥳 You can check out the full docs on The DevSec Blueprint (DSB) in the Projects section here: https://lnkd.in/gq-t8hSG

    Here’s a quick rundown of what you can learn:
    ✅ Secure CI/CD Architecture: Combine AWS CodePipeline, CodeBuild, S3, SSM Parameter Store, and EKS for a seamless, end-to-end workflow.
    ✅ Integrated Security Scanning: Embed Snyk and Trivy checks directly into your pipeline to catch vulnerabilities before production.
    ✅ Infrastructure as Code: Leverage Terraform for consistent, scalable provisioning and easier infrastructure management.
    ✅ Containerized Deployments with EKS: Gain confidence deploying Kubernetes workloads to EKS, ensuring effortless scaling and orchestration.
    ✅ Proper Secrets Management: Use AWS Systems Manager Parameter Store to securely handle sensitive data, following best practices every step of the way.

    Check it out if you're looking to build cloud-native DevSecOps pipelines within AWS!
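    To give a feel for the security-scanning piece: a CodeBuild stage that gates the build on a Trivy image scan could look roughly like this. This is a generic sketch, not taken from the DSB docs — the `IMAGE_REPO` variable is a hypothetical environment variable, and the severity gate is one possible policy:

```yaml
# buildspec.yml — illustrative CodeBuild phase gating on a Trivy scan
version: 0.2
phases:
  build:
    commands:
      - docker build -t $IMAGE_REPO:$CODEBUILD_RESOLVED_SOURCE_VERSION .
      # Fail the build (exit code 1) if HIGH/CRITICAL vulnerabilities are found
      - trivy image --exit-code 1 --severity HIGH,CRITICAL $IMAGE_REPO:$CODEBUILD_RESOLVED_SOURCE_VERSION
```

    A Snyk check would slot into the same phase the same way, as another command that fails the build on findings.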

  • View profile for Hrittik Roy

    Platform Advocate at vCluster | CNCF Ambassador | Google Venkat Scholar | CKA, KCNA, PCA | Gold Microsoft LSA | GitHub Campus Expert 🚩| 4X Azure | LIFT Scholar '21|

    11,195 followers

    Learn how to set up event-driven autoscaling in Kubernetes using KEDA’s CPU trigger with a GKE cluster! In this step-by-step tutorial, you’ll:
    - Configure KEDA to scale your deployments based on CPU usage
    - Create a Kubernetes deployment and expose it with a load balancer
    - Define a ScaledObject to connect KEDA with your deployment for CPU-based scaling
    - Simulate traffic to your application and watch KEDA automatically scale your pods in real time
    Perfect for anyone looking to master Kubernetes autoscaling beyond the basics. Watch to the end for a live demonstration of KEDA in action! YouTube URL: https://lnkd.in/gxPMvfnh
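    The ScaledObject mentioned above has roughly this shape. Names, replica bounds, and the utilization target are illustrative, not taken from the video — and note KEDA's CPU scaler needs CPU resource requests set on the target Deployment:

```yaml
# ScaledObject wiring KEDA's CPU trigger to a Deployment (values illustrative)
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: cpu-scaledobject
spec:
  scaleTargetRef:
    name: my-app              # the Deployment to scale (hypothetical name)
  minReplicaCount: 1
  maxReplicaCount: 10
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "50"           # target average CPU utilization, in percent
```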

  • View profile for Kirsch Mackey

    Tech Writer, Blogger & YouTuber | Technical Marketing & Content Strategist | Altium • Cadence • Siemens • Airia AI | EE Workflow / ECAD / AI Productivity

    13,417 followers

    ⚡ PCB Design Fundamentals Day 3: Component Selection Strategy

    Component selection is about electronics specifications AND about future-proofing your design. Three considerations that prevent costly revisions:
    1. Availability across multiple vendors (single-source components are design time bombs)
    2. Thermal characteristics under worst-case conditions
    3. Common footprint alternatives for when your primary choice becomes unavailable

    I learned this lesson painfully when the sole source of a specialized connector suddenly got bought out and the lead time jumped to 26 weeks — mid-production of our boards. This forced me to select a surface-mount version of that part that only that manufacturer made, change the footprint, resend all boards and documentation to the manufacturer, and so on. The result was a 3-week wait... and it risked millions in funding by pushing things too close to the deadline.

    Now whenever I evaluate an ECAD tool, I look at how well it helps me with BOM risk management, with pre-vetted alternatives for critical parts. Some packages have their own tools for this, like the BOM Management tool in Altium.

    What's your strategy for managing component supply chain risks? #ComponentSelection #SupplyChainStrategy #DesignResilience

  • View profile for Ravishanka Fonseka

    Systems Engineer @ MillenniumIT ESP | B.Sc. in Information Technology | 4K+ Family | 📧 4x Microsoft

    4,466 followers

    🚨 Microsoft has officially deprecated the Azure AD and MSOnline PowerShell modules as of March 30, 2024. While these modules will continue to function until March 30, 2025, support is now limited to migration assistance and security fixes.

    🔹 Key dates to remember:
    ✔ June 30, 2024 – MSOnline versions 1.0.x may experience disruptions
    ✔ March 30, 2025 – Full deprecation of both modules

    📢 What’s next? Microsoft recommends migrating to the Microsoft Graph PowerShell SDK, which provides enhanced security, modern authentication, and broader capabilities for managing Microsoft Entra ID (formerly Azure AD).

    💡 If you're still relying on the AzureAD or MSOnline modules, now is the time to plan your migration! Check out the Migration FAQ and start transitioning your scripts to Graph PowerShell. 🔗 Learn more: https://lnkd.in/gNyQ2JvY

    #AzureAD #MicrosoftGraph #PowerShell #EntraID #MSOnline #ITAdmins #CloudManagement
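    For many admin scripts the move is close to a cmdlet-for-cmdlet swap. A rough mapping of common equivalents — the scope and user shown are illustrative, and your scripts may need different Graph permissions:

```powershell
# Deprecated modules (MSOnline / AzureAD):
Connect-MsolService
Get-MsolUser -All
Connect-AzureAD
Get-AzureADUser -ObjectId user@contoso.com

# Microsoft Graph PowerShell SDK equivalents:
Connect-MgGraph -Scopes "User.Read.All"    # scope is illustrative
Get-MgUser -All
Get-MgUser -UserId user@contoso.com
```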

  • View profile for Dilawar Javaid

    AWS × 3 certified| Helping Brands Turn Traffic into Revenue | CLOUD ENGINEERING| DATA ENGINEERING | Python Django| Fullstack web | Wordpress| Next.js| Three.js | React. js| Node.js|Kubernetes|Terraform| WebGl

    9,506 followers

    🚨 Struggling with Kubernetes Deployments? Let’s Break Down How to Debug Like a Pro! 🔍

    Kubernetes is a game-changer for container orchestration, but let’s be honest—Kubernetes deployments don’t always go smoothly. 😬 We’ve all faced those frustrating moments when something just doesn’t work as expected. But fear not! Debugging Kubernetes can be your secret weapon for fixing deployment issues and taking your skills to the next level. 🚀 Here’s how you can debug a Kubernetes deployment like a pro and turn those headaches into solutions!

    Step 1: Check the Pods’ Status
    1️⃣ What to Do:
    🟢 Use the kubectl get pods command to check the status of your pods.
    🟢 Look for pods that are stuck in a "Pending" or "CrashLoopBackOff" state.
    1️⃣ Why It Matters:
    🟢 This is your first indication of what’s going wrong. If your pods aren’t starting properly, there’s a deeper issue to tackle.

    Step 2: Inspect Pod Logs
    2️⃣ What to Do:
    🟡 Run kubectl logs <pod-name> to retrieve logs from a specific pod.
    🟡 If your container is crashing, these logs are crucial for identifying the root cause.
    2️⃣ Why It Matters:
    🟡 Logs give you detailed insights into what's happening inside the pod—whether it's a misconfiguration, missing environment variables, or something else.

    Step 3: Describe the Deployment
    3️⃣ What to Do:
    🔵 Use kubectl describe deployment <deployment-name> to get a detailed breakdown of the deployment, including events, pod scheduling issues, and resource constraints.
    3️⃣ Why It Matters:
    🔵 This command helps you spot potential issues with node scheduling, resource limits, or even image pull errors. It’s the full story of your deployment’s health!

    Step 4: Check for Resource Limitations
    4️⃣ What to Do:
    🔴 Look for resource issues with kubectl describe node <node-name>. Check if your pods have enough memory and CPU to run properly.
    4️⃣ Why It Matters:
    🔴 Many deployment failures come down to insufficient resources. Scaling your resources or adjusting your pod limits might be all you need to fix the problem!

    Step 5: Review ConfigMaps and Secrets
    5️⃣ What to Do:
    🟠 Check if your deployment is correctly loading ConfigMaps and Secrets. Use kubectl get configmap and kubectl get secret to ensure they are properly mounted.
    5️⃣ Why It Matters:
    🟠 Misconfigured environment variables, credentials, or missing files can cause containers to fail unexpectedly. This step helps you ensure the right settings are in place.

    Step 6: Network Connectivity
    6️⃣ What to Do:
    🟣 Use kubectl exec -it <pod-name> -- /bin/bash to enter a pod’s shell and troubleshoot network connectivity with tools like curl or ping.
    6️⃣ Why It Matters:
    🟣 If your pods can’t communicate with each other or external services, the entire deployment can break. Ensuring connectivity is critical for debugging.

    💬 Join the Discussion

    #Kubernetes #DevOps #CloudComputing #Containerization #K8s #Debugging #TechTips #DigitalTransformation #DilawarJavaid #DeploymentIssues #CloudSolutions #InfrastructureAsCode
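    Steps 4 and 5 above ultimately come down to the pod spec. A minimal container-spec sketch — image, names, and resource numbers are hypothetical:

```yaml
# Illustrative container spec: explicit resource requests/limits (Step 4),
# plus environment loaded from a ConfigMap and a Secret (Step 5)
containers:
  - name: api
    image: my-api:latest        # hypothetical image
    resources:
      requests:
        cpu: 250m
        memory: 256Mi
      limits:
        cpu: 500m
        memory: 512Mi
    envFrom:
      - configMapRef:
          name: api-config      # must exist, or the container fails to start
      - secretRef:
          name: api-secrets
```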

  • View profile for Julio Casal

    .NET/Azure Backend • DevOps/Platform Engineering • Developer Productivity • CI/CD • Microservices • Ex-Microsoft

    55,767 followers

    You dockerized your .NET Web apps. Great, but next you'll face these:
    - How to manage the lifecycle of your containers?
    - How to scale them?
    - How to make sure they are always available?
    - How to manage the networking between them?
    - How to make them available to the outside world?
    To deal with those, you need 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀, the container orchestration platform designed to manage your containers in the cloud. I started using Kubernetes about 6 years ago when I joined the ACR team at Microsoft, and never looked back. It's the one thing that put me ahead of my peers, given the increasing move to Docker containers and cloud-native development. Every single team I joined since then has used Azure Kubernetes Service (AKS) because of the impressive things you can do with it, like:
    - Quickly scale your app up and down as needed
    - Ensure your app is always available
    - Automatically distribute traffic between containers
    - Roll out updates and changes fast and with zero downtime
    - Ensure the resources on all boxes are used efficiently
    How to get started? Check out my step-by-step AKS guide for .NET developers here 👇 https://lnkd.in/gBPJT6wv Keep learning!
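    The questions in that first list map onto two core Kubernetes objects: a Deployment (lifecycle, scaling, availability) and a Service (networking, outside access). A minimal sketch — the app name, image, and ports here are hypothetical, not from the linked guide:

```yaml
# A Deployment keeps 3 replicas of a dockerized web app alive and rolls out
# updates; a LoadBalancer Service exposes them to the outside world.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3
  selector:
    matchLabels: { app: webapp }
  template:
    metadata:
      labels: { app: webapp }
    spec:
      containers:
        - name: webapp
          image: myregistry.azurecr.io/webapp:1.0   # hypothetical image
          ports:
            - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: webapp
spec:
  type: LoadBalancer
  selector:
    app: webapp
  ports:
    - port: 80
      targetPort: 8080
```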

  • View profile for Eswar Sai Kumar L.

    Cloud and DevOps Enthusiast • AWS Certified Solutions Architect and Cloud Practitioner

    1,958 followers

    🚀 End-to-End DevOps Project on AWS

    I recently completed a cloud-native DevOps project where I built and deployed a full-stack application using Terraform, Jenkins, Docker and Kubernetes on AWS. 🔗 GitHub Repo: 👉 https://lnkd.in/g7G2Cd-v

    Here’s a breakdown of what I implemented:

    🏗️ 𝗜𝗻𝗳𝗿𝗮𝘀𝘁𝗿𝘂𝗰𝘁𝘂𝗿𝗲 𝗮𝘀 𝗖𝗼𝗱𝗲 – 𝗧𝗲𝗿𝗿𝗮𝗳𝗼𝗿𝗺
    • Used Terraform to automate infrastructure provisioning with state management and locking enabled through AWS S3.
    ✅ Resources created:
    • VPC with 3 subnets:
      • Public subnet → Bastion host, VPN, ALB (Ingress Controller)
      • Private subnet → EKS cluster
      • DB subnet → RDS (MySQL)
    • Integrated with Route53 (DNS), CDN, and EFS for persistent storage.

    ☸️ 𝗞𝘂𝗯𝗲𝗿𝗻𝗲𝘁𝗲𝘀 𝗔𝗿𝗰𝗵𝗶𝘁𝗲𝗰𝘁𝘂𝗿𝗲 – 𝗘𝗞𝗦
    • Traffic enters through an AWS ALB, handled by the Ingress Controller
    • Routed to microservices via Kubernetes Services
    • Used Deployments, ConfigMaps, and Helm for management
    • Persistent data handled using EFS volumes via PVCs
    • Followed a clean microservices architecture for separation of concerns

    🚀 𝗖𝗜/𝗖𝗗 𝗣𝗶𝗽𝗲𝗹𝗶𝗻𝗲 – 𝗝𝗲𝗻𝗸𝗶𝗻𝘀
    • Set up a complete CI/CD pipeline triggered by GitHub webhooks. The Jenkins pipeline includes:
    1. Dependency installation
    2. Code analysis with SonarQube
    3. Infra provisioning using Terraform
    4. Docker image build & push to Amazon ECR
    5. Kubernetes deployment using Helm

    📌 This project helped me understand the real-world DevOps workflow, from infrastructure setup to CI/CD automation and scalable deployments on EKS. 🔁 Repost if you found it useful

    #AWS #DevOps #Terraform #Jenkins #EKS #Kubernetes #CICD #CloudComputing #InfrastructureAsCode #Helm #SonarQube #ECR #EFS #Route53
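    The ALB entry point in an architecture like this is typically declared as an Ingress handled by the AWS Load Balancer Controller. A sketch — the host, service name, and annotations are illustrative assumptions, not taken from the repo:

```yaml
# Illustrative Ingress for an ALB fronting EKS microservices
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb
  rules:
    - host: app.example.com        # hypothetical Route53 record
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: frontend     # hypothetical microservice Service
                port:
                  number: 80
```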

  • View profile for Tauseef Fayyaz

    Teaching skills that land you a job • Team Lead & Full Stack Engineer • Career Coach • 0.1M+ learners • DM for collabs & promos • Building Sefism • Socials → @tauseeffayyaz (𝕏 & FB append 0)

    87,660 followers

    Deployment patterns are systematic approaches for safely introducing new application features to users. How well downtime is minimized depends on the deployment method employed, and some patterns also facilitate the gradual rollout of additional functionality. These strategies enable thorough testing with a limited user group before making features widely available.

    1. Canary Releases:
      - Description: Canary releases are a strategy for identifying potential issues before they impact all users. Rather than immediately exposing a new feature to everyone, it's initially made available to a select user group.
      - Process: Continuous monitoring occurs after the canary release, and any issues discovered are promptly addressed. Once the release proves stable, it's rolled out to the full production environment.
      - Importance: Canary releases are a key element of continuous deployments.

    2. Blue/Green Deployments:
      - Description: Blue/green deployments involve maintaining two nearly identical environments concurrently, reducing risk and minimizing downtime. These environments are referred to as "blue" and "green," with only one active at any given time.
      - Process: A router or load balancer is used to control traffic between the blue and green environments, enabling rapid rollback if issues arise.
      - Variation: Red/black deployments are similar, with the "red" version live in production and the "black" version deployed to standby servers. Traffic is directed to the black version once it's operational, offering a strict one-version-at-a-time approach.

    3. Feature Toggles:
      - Description: Feature toggles enable runtime activation or deactivation of specific features. This allows the introduction of new software without exposing users to unfamiliar or modified functionality.
      - Use: Feature toggles support continuous deployments by separating software releases from feature deployments.

    4. A/B Testing:
      - Description: A/B testing involves comparing the performance of two app versions to determine which one is more effective. It's akin to conducting an experiment.
      - Process: Users are randomly presented with different page versions, and statistical analysis is employed to assess which variant better accomplishes objectives.

    5. Dark Launches:
      - Description: In a dark launch, a new feature is released to a subset of users without any announcement — often without the users even knowing they are part of the test.
      - Purpose: The term "dark" signifies that the feature goes out quietly, so its behavior and effectiveness can be assessed under real traffic before a public launch.

    These deployment strategies offer structured ways to manage the release of new features and updates, ensuring minimal disruption while maintaining the quality and stability of software applications. Pic by Tech World With Milan

    #deployments #patterns #systemdesign #interviewtips #architecture
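    In plain Kubernetes, the canary pattern above can be approximated with two Deployments behind one Service, using replica counts to set the traffic split (~10% here). This is a generic sketch with hypothetical names and images — service meshes and ingress controllers offer finer-grained weighting:

```yaml
# One Service selects pods from both Deployments; 9 stable + 1 canary
# replica means roughly 10% of requests hit the canary.
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    app: web                  # matches both Deployments below
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-stable
spec:
  replicas: 9
  selector:
    matchLabels: { app: web, track: stable }
  template:
    metadata:
      labels: { app: web, track: stable }
    spec:
      containers:
        - name: web
          image: myapp:1.0    # hypothetical current version
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-canary
spec:
  replicas: 1
  selector:
    matchLabels: { app: web, track: canary }
  template:
    metadata:
      labels: { app: web, track: canary }
    spec:
      containers:
        - name: web
          image: myapp:1.1    # hypothetical canary version
```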
