<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:media="http://search.yahoo.com/mrss/"><channel><title><![CDATA[Keelan Cannoo]]></title><description><![CDATA[Exploring knowledge and embracing learning in open source, Linux and beyond. Dive into insightful guides and reflections that inspire curiosity and growth!]]></description><link>https://keelancannoo.com/</link><image><url>https://keelancannoo.com/favicon.png</url><title>Keelan Cannoo</title><link>https://keelancannoo.com/</link></image><generator>Ghost 5.88</generator><lastBuildDate>Mon, 27 Apr 2026 19:33:08 GMT</lastBuildDate><atom:link href="https://keelancannoo.com/rss/" rel="self" type="application/rss+xml"/><ttl>60</ttl><item><title><![CDATA[Three Days at Lianqiu Lake]]></title><description><![CDATA[A quiet reflection on an unexpected trip to Shanghai, three days at Huawei’s Lianqiu Lake R&D Center and the lessons that stayed beyond the technical sessions.]]></description><link>https://keelancannoo.com/three-days-at-lianqiu-lake/</link><guid isPermaLink="false">69709e6306e3e8d51ffd37a7</guid><dc:creator><![CDATA[Keelan Cannoo]]></dc:creator><pubDate>Fri, 30 Jan 2026 14:47:55 GMT</pubDate><media:content url="https://keelancannoo.com/content/images/2026/01/lianqiu.webp" medium="image"/><content:encoded><![CDATA[<img src="https://keelancannoo.com/content/images/2026/01/lianqiu.webp" alt="Three Days at Lianqiu Lake"><p>It started as a typical Wednesday. </p><p>I was deep in the usual routine when the news hit. 
In just six days, I would be traveling to one of the most technologically advanced countries in the world.</p><p>The destination was Shanghai, specifically the Lianqiu Lake R&amp;D Center, to attend Huawei&#x2019;s Pacific Plan Partner Training.</p><h2 id="a-campus-built-at-city-scale">A Campus Built at City Scale</h2><p>I spent three days at the newly opened Lianqiu Lake R&amp;D Center. This is not a conventional corporate site. It is a purpose-built research environment operating at city scale.</p><p>Spanning approximately 2,600 acres, it exceeds the size of Apple&#x2019;s and Microsoft&#x2019;s primary campuses combined. Yet despite its scale, it never felt overwhelming. One of the coolest parts is a vintage-style red tram system that connects its different districts.</p><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2026/01/red-train.jpeg" class="kg-image" alt="Three Days at Lianqiu Lake" loading="lazy" width="1206" height="2116" srcset="https://keelancannoo.com/content/images/size/w600/2026/01/red-train.jpeg 600w, https://keelancannoo.com/content/images/size/w1000/2026/01/red-train.jpeg 1000w, https://keelancannoo.com/content/images/2026/01/red-train.jpeg 1206w" sizes="(min-width: 720px) 720px"></figure><h2 id="training-depth-and-perspective">Training Depth and Perspective</h2><p>The Pacific Plan training brought together partners from diverse regions and technical backgrounds. The sessions were structured to focus less on configurations and more on architectural intent, trade-offs and system-level implications.</p><p>Questions were explored through reasoning rather than product features. 
This approach exposed gaps in my own understanding and provided a clearer sense of what I need to strengthen next.</p><h2 id="three-days-of-deep-technical-focus">Three Days of Deep Technical Focus</h2><p>The three days followed a clear structure, each day building on a different layer of the stack.</p><p>The first day focused mainly on DCS and full-stack solutions, setting the foundation for how infrastructure components come together as complete systems rather than isolated products. The emphasis was on understanding how these pieces fit and how they shape broader architectures.</p><p>The second day went deeper into data-centric workloads. This included AI data lake solutions, data protection products and architectures, OceanStor Dorado all-flash storage platforms, along with discussions around OLTP databases and Kunpeng computing. Instead of treating these topics separately, the sessions showed how performance, protection, and compute decisions tend to overlap in real environments.</p><p>The final day shifted toward commercial and distribution market products and solutions, helping connect the earlier technical discussions to practical deployment and market realities.</p><p>Across the three days, the focus gradually moved away from individual technologies and toward understanding why systems are designed the way they are, with technical choices tied back to constraints, use cases, and long-term impact.</p><h2 id="learning-beyond-the-room">Learning Beyond the Room</h2><p>Some of the most valuable learning came from simply hearing about real projects being rolled out across different continents. 
Partners shared firsthand experiences from deployments in very different markets, each with its own constraints, scale, and regulatory realities.</p><p>Those conversations added practical context to the theory and sparked ideas for approaches I could realistically adapt and apply in my own environment.</p><h2 id="embracing-my-inner-po">Embracing My Inner Po</h2><p>Not everything on campus was purely technical. After the day wrapped up, we spent an hour learning Kung Fu. </p><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2026/01/kung-fu-panda-po-practicing-moves-6jpjj7tnsmynv50c.webp" class="kg-image" alt="Three Days at Lianqiu Lake" loading="lazy" width="480" height="480"></figure><p>Between that and the impressive range of campus food options, it was hard not to lean into my inner Po.</p><h2 id="an-unexpected-perspective">An Unexpected Perspective</h2><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2026/01/shanghai.png" class="kg-image" alt="Three Days at Lianqiu Lake" loading="lazy" width="1184" height="864" srcset="https://keelancannoo.com/content/images/size/w600/2026/01/shanghai.png 600w, https://keelancannoo.com/content/images/size/w1000/2026/01/shanghai.png 1000w, https://keelancannoo.com/content/images/2026/01/shanghai.png 1184w" sizes="(min-width: 720px) 720px"></figure><p>Visiting Shanghai was not something I had planned or put on my bucket list this year, but I am grateful for the experience.</p><p>It offered more than technical exposure. It provided distance from routine, space to reflect, and a clearer view of where my thinking needs to mature. Sometimes you have to go halfway across the world and practice some Kung Fu to realize exactly where you need to grow.</p>]]></content:encoded></item><item><title><![CDATA[Exploring Plausible Analytics]]></title><description><![CDATA[Looking for a lightweight, privacy-focused alternative to Google Analytics? 
Plausible Analytics offers simple, cookie-free tracking that respects your users' privacy. Plus, it’s open-source, so you can even self-host it for complete control over your data.]]></description><link>https://keelancannoo.com/exploring-plausible-analytics/</link><guid isPermaLink="false">66f6bbe0c51184f1c7ae1557</guid><dc:creator><![CDATA[Keelan Cannoo]]></dc:creator><pubDate>Sun, 29 Sep 2024 11:55:44 GMT</pubDate><media:content url="https://keelancannoo.com/content/images/2024/09/idk4lfuNlY_1727610714445.png" medium="image"/><content:encoded><![CDATA[<img src="https://keelancannoo.com/content/images/2024/09/idk4lfuNlY_1727610714445.png" alt="Exploring Plausible Analytics"><p>A while back, I <a href="https://keelancannoo.com/from-ground-zero-to-go-live-guide-to-setting-up-ghost-cms-on-linux/" rel="noreferrer">set up Ghost CMS on my Linux server</a> to power my blog. After getting the site running smoothly, I realized I needed a reliable analytics tool to track visitors and see how my content was performing. But I wasn&#x2019;t looking for just any tool&#x2014;I wanted something simple, privacy-focused and easy to integrate with my setup. That&#x2019;s when I discovered <a href="https://plausible.io/?ref=keelancannoo.com" rel="noreferrer">Plausible Analytics</a>.</p><h2 id="why-i-chose-plausible-for-tracking">Why I Chose Plausible for Tracking</h2><p>Running a website means understanding what&#x2019;s working and what isn&#x2019;t but I wasn&#x2019;t willing to compromise my visitors&apos; privacy to get those insights. Most traditional analytics tools rely on cookies, complex data mining and extensive user tracking, which didn&#x2019;t align with what I was comfortable with.</p><p>When I started looking into alternatives, I wanted something that prioritized privacy and was easy to use&#x2014;without the clutter of unnecessary data. 
Plausible stood out to me for a few key reasons:</p><ul><li><strong>Privacy-Focused</strong>: The biggest selling point was Plausible&#x2019;s no-cookie approach. It doesn&#x2019;t use invasive tracking or collect personal data, which means it complies with privacy laws like GDPR, CCPA and PECR. I could gather the insights I needed while respecting my audience&#x2019;s privacy.</li><li><strong>Open Source</strong>: Since Plausible is open source, I had the option to self-host it which gave me complete control over my data. No need to hand it over to a big corporation that might store or sell it for their own purposes. It&#x2019;s reassuring to know that my data stays with me, on my own server.</li><li><strong>Simplicity</strong>: Many analytics tools overwhelm you with data you don&#x2019;t need, but Plausible has an incredibly clean, intuitive interface. It highlights key metrics like unique visitors, page views and bounce rates, making it easy to see what matters without getting lost in a sea of charts and filters.</li></ul><p>For a detailed comparison between Plausible and Google Analytics, you can check out their <a href="https://plausible.io/vs-google-analytics?ref=keelancannoo.com" rel="noopener">official website</a>.</p><h2 id="setting-it-up">Setting It Up</h2><p>Setting up Plausible on my server was very straightforward. After installing Docker and cloning the <a href="https://github.com/plausible/community-edition/?ref=keelancannoo.com" rel="noreferrer">Plausible Community Edition repository</a>, I followed the instructions on GitHub and had it running in no time. The tracking script was simple to add to Ghost CMS&#x2014;just a matter of pasting it into the header section. I also pointed my DNS to the Plausible server and everything worked flawlessly.</p><h2 id="using-plausible-analytics">Using Plausible Analytics</h2><p>Once Plausible was integrated, I started exploring the dashboard and honestly I was impressed. 
Even though the interface is minimal, it still delivers the core metrics that matter:</p><ul><li><strong>Unique visitors</strong>: See how many people are visiting your site.</li><li><strong>Page views</strong>: Track the number of times pages are loaded.</li><li><strong>Bounce rates</strong>: Understand how many people leave your site after viewing just one page.</li></ul><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/09/image.png" class="kg-image" alt="Exploring Plausible Analytics" loading="lazy" width="1149" height="692" srcset="https://keelancannoo.com/content/images/size/w600/2024/09/image.png 600w, https://keelancannoo.com/content/images/size/w1000/2024/09/image.png 1000w, https://keelancannoo.com/content/images/2024/09/image.png 1149w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/09/image-1.png" class="kg-image" alt="Exploring Plausible Analytics" loading="lazy" width="1137" height="958" srcset="https://keelancannoo.com/content/images/size/w600/2024/09/image-1.png 600w, https://keelancannoo.com/content/images/size/w1000/2024/09/image-1.png 1000w, https://keelancannoo.com/content/images/2024/09/image-1.png 1137w" sizes="(min-width: 720px) 720px"></figure><p>These insights have helped me better understand what content is engaging my readers and which posts are driving the most traffic. The best part is how easy it is to navigate&#x2014;no unnecessary clutter or overly technical reports, just the data I care about.</p><h2 id="a-quick-shoutout-to-fathom">A Quick Shoutout to Fathom</h2><p>While exploring options, I also came across <a href="https://usefathom.com/?ref=keelancannoo.com" rel="noreferrer">Fathom Analytics</a>. Like Plausible, Fathom emphasizes privacy and simplicity.</p><h2 id="my-takeaway">My Takeaway</h2><p>Overall, my experience with Plausible has been really positive. 
It helps me focus on the metrics that matter without the noise and it&#x2019;s comforting to know that I&#x2019;m respecting my visitors&#x2019; privacy while gaining valuable insights.</p><p>If you&#x2019;re looking for an analytics tool that prioritizes privacy and simplicity, I highly recommend checking out Plausible. It could be just what you need for your website. It&#x2019;s especially ideal for those who want a lightweight, ethical approach to web tracking.</p>]]></content:encoded></item><item><title><![CDATA[Learn About ICANN's Next Round of New gTLDs and Applicant Support Program]]></title><description><![CDATA[On September 9th, 2024, Cyberstorm.mu attended an ICANN meeting discussing the next round of new gTLDs and the Applicant Support Program, aimed at making gTLDs more accessible to underserved regions.]]></description><link>https://keelancannoo.com/learn-about-icanns-next-round-of-new-gtlds-and-applicant-support-program/</link><guid isPermaLink="false">66dc27afb6e9f319516635c3</guid><category><![CDATA[Events]]></category><category><![CDATA[Cyberstorm.mu]]></category><dc:creator><![CDATA[Keelan Cannoo]]></dc:creator><pubDate>Tue, 10 Sep 2024 12:25:04 GMT</pubDate><media:content url="https://keelancannoo.com/content/images/2024/10/icann_meeting.webp" medium="image"/><content:encoded><![CDATA[<img src="https://keelancannoo.com/content/images/2024/10/icann_meeting.webp" alt="Learn About ICANN&apos;s Next Round of New gTLDs and Applicant Support Program"><p>On September 9th, 2024, <a href="https://cyberstorm.mu/?ref=keelancannoo.com" rel="noreferrer">Cyberstorm.mu</a> had the privilege of attending a closed meeting at Labourdonnais Waterfront Hotel, where <a href="https://www.icann.org/?ref=keelancannoo.com" rel="noreferrer">ICANN</a> (Internet Corporation for Assigned Names and Numbers) presented the roadmap for their upcoming Next Round of New Generic Top-Level Domains<strong> </strong>(ngTLDs) and the Applicant Support Program. 
Mauritius is the first country to host an in-person discussion about the next round of new gTLDs, set to open in April 2026. Previous discussions were held online, with more face-to-face sessions planned in other countries.</p><h2 id="icann-and-its-role">ICANN and Its Role</h2><p>For those unfamiliar, ICANN plays a crucial role in managing the DNS, which is essential to how the Internet functions. They ensure that domain names, IP addresses and top-level domains (TLDs) work together securely and efficiently. Without ICANN&#x2019;s oversight, the Internet as we know it wouldn&#x2019;t function as smoothly. Any changes or expansions they propose can directly shape the future of the online world, especially in underserved regions like Africa.</p><h2 id="the-focus-on-africa-opportunities-and-challenges">The Focus on Africa: Opportunities and Challenges</h2><p>One of the key points raised was Africa&apos;s underwhelming participation in the last round of <a href="https://newgtlds.icann.org/sites/default/files/applications-overview-13jun12-en.pdf?ref=keelancannoo.com" rel="noreferrer">gTLD applications back in 2012</a>. Out of <a href="https://newgtlds.icann.org/en/program-status/statistics?ref=keelancannoo.com" rel="noreferrer">1930 applications</a> globally, only 17 came from Africa, most of which were from South Africa. This imbalance was flagged as a serious issue. 
ICANN officials, including Pierre Dandjinou, Vice President of Stakeholder Engagement for the Africa region, emphasized the importance of empowering African nations to transition from being mere consumers of the Internet to becoming producers, capable of shaping the future of their digital economies and improving global representation.</p><h2 id="understanding-the-top-level-domains-tlds">Understanding the Top-Level Domains (TLDs)</h2><p>ICANN&apos;s Senior Director of ngTLD Program, Bob Ochieng, provided an overview of the different categories of domain names:</p><ul><li>ccTLDs (Country-Code Top-Level Domains): These are two-letter domains like .mu for Mauritius or .ke for Kenya.</li><li>gTLDs (Generic Top-Level Domains): Domains such as .com, .org or .info fall under this category. In the last round of applications, the number of gTLDs grew from 22 to over 1200.</li><li>IDNs (Internationalized Domain Names): These can be either ccTLDs or gTLDs and use non-Latin scripts, allowing domain names to be written in languages like Arabic, Chinese and Japanese. For example, &#x43F;&#x440;&#x430;&#x432;&#x438;&#x442;&#x435;&#x43B;&#x44C;&#x441;&#x442;&#x432;&#x43E;.&#x440;&#x444; translates to government.ru.</li></ul><p>Bob highlighted some challenges with new gTLDs. Domains like .africa or .capetown, while relevant, might face technical issues. For instance, longer domain names sometimes face issues in email deliverability, where emails can get dropped along the way.</p><h2 id="applying-for-a-gtld"><strong>Applying for a gTLD</strong></h2><p>Applying for a gTLD means taking on the responsibility of operating a domain registry, which involves significant technical and operational tasks. If you don&#x2019;t have the technical capabilities to operate a gTLD registry, you can partner with an accredited Registry Service Provider (RSP) for a fee. 
RSPs offer the necessary infrastructure and expertise to manage the technical and operational aspects of running a gTLD.</p><p>Even if you have the technical capacity to manage a gTLD, you need to be accredited by ICANN to ensure that you meet the same high standards as all other operators. This accreditation process is crucial for maintaining uniformity and reliability across the global domain name system.</p><p>Managing a gTLD is a significant responsibility with substantial costs. For example, if a ccTLD like .mu goes down, it affects all users relying on that domain, from businesses to individuals. The same risk applies to gTLDs, making the role of a registry operator both crucial and costly. Stakes this high make the domain name system essential not just for personal or business use but for the national economy.</p><h2 id="why-should-this-matter">Why Should This Matter?</h2><p>While .com is the dominant domain in North America, ccTLDs are more prevalent in other regions like Europe, Asia and Africa. In Africa, the market had grown to 7 million domain names by 2022. Compare that to around 20 million in China and you see the vast room Africa has to grow.</p><p>Encouraging more African countries to participate in gTLDs is about more than just building Internet infrastructure&#x2014;it&#x2019;s about generating new revenue streams, asserting control over local digital policies and enhancing credibility in the global digital space. 
As Bob highlighted, gTLDs offer developing nations the potential to elevate their digital sovereignty and create monetization opportunities.</p><h2 id="the-applicant-support-program-%E2%80%93-making-gtlds-accessible">The Applicant Support Program &#x2013; Making gTLDs Accessible</h2><p>One of the standout issues from the 2012 round was the high cost of applying for a gTLD, with application fees set at $185,000 (note that the application fee for the next round of new gTLDs is expected to be around $220,000). This posed a significant barrier for many organizations, especially in developing economies. To address this, the Applicant Support Program was enhanced to make applying for a gTLD more accessible to nonprofits, indigenous organizations, micro and small businesses and others who need financial assistance. Here&#x2019;s how it works:</p><ul><li><strong>Eligibility Criteria</strong>: Applicants need to meet certain criteria, including financial need and viability.</li><li><strong>Commitment Fee</strong>: The $1,500 commitment fee helps ensure that applicants are serious about their application.</li><li><strong>Fee Reduction</strong>: Successful applicants under the program receive an 85% reduction in the application fee.</li></ul><p>The program offers financial support, making the process more accessible to smaller players.</p><h2 id="what-happens-next">What Happens Next?</h2><p>Looking forward, there are some key dates to keep in mind:</p><ul><li><strong>November 19, 2024 &#x2013; November 19, 2025</strong>: The application window for the Applicant Support Program will be open. 
You can apply to receive support without disclosing the specific domain name you&#x2019;re applying for, maintaining confidentiality.</li><li><strong>May 2025</strong>: The Applicant Guidebook will be released, outlining the rules and requirements.</li><li><strong>April 2026</strong>: General applications for gTLDs will be accepted.</li></ul><p>This new round of gTLDs will open up the possibility for more inclusive domain names, providing opportunities for groups to create their own space online.</p><p>More information can be found at <a href="https://newgtldprogram.icann.org/en?ref=keelancannoo.com">https://newgtldprogram.icann.org/en</a>.</p><h2 id="final-thoughts">Final Thoughts</h2><p>ICANN&apos;s focus on global diversity and support for underserved regions is crucial for making the Internet more inclusive and representative.</p><p>For Mauritius and Africa, this initiative transcends domain names&#x2014;it represents digital sovereignty and the chance to create opportunities for local communities. We are not just passive users of the Internet but have the potential to be active contributors to the global digital economy.</p><p>Let&#x2019;s seize this opportunity to drive forward a more diverse, inclusive and locally-driven Internet landscape.</p><figure class="kg-card kg-embed-card"><blockquote class="twitter-tweet"><p lang="en" dir="ltr">Thank you, merci, and mersi to the organizers of the Africa Internet Summit <a href="https://twitter.com/hashtag/AIS24?src=hash&amp;ref_src=twsrc%5Etfw&amp;ref=keelancannoo.com">#AIS24</a>, <a href="https://twitter.com/AfNOGWorkshops?ref_src=twsrc%5Etfw&amp;ref=keelancannoo.com">@AfNOGWorkshops</a>, for a rewarding week in Mauritius.<br>We appreciate everyone&apos;s participation as we continue working toward building a more diverse and inclusive African Internet ecosystem. 
<a href="https://twitter.com/hashtag/ICANN?src=hash&amp;ref_src=twsrc%5Etfw&amp;ref=keelancannoo.com">#ICANN</a> <a href="https://t.co/b3MqtcalvS?ref=keelancannoo.com">pic.twitter.com/b3MqtcalvS</a></p>&#x2014; ICANN (@ICANN) <a href="https://twitter.com/ICANN/status/1834599900861903353?ref_src=twsrc%5Etfw&amp;ref=keelancannoo.com">September 13, 2024</a></blockquote>
<script async src="https://platform.twitter.com/widgets.js" charset="utf-8"></script></figure>]]></content:encoded></item><item><title><![CDATA[Peter Schwabe at a Cyberstorm.mu Social Event]]></title><description><![CDATA[Cyberstorm.mu welcomed Peter Schwabe for an insightful evening where he shared his expertise on cryptography.]]></description><link>https://keelancannoo.com/peter-schwabe-at-a-cyberstorm-mu-social-event/</link><guid isPermaLink="false">66dc1553b6e9f3195166359e</guid><category><![CDATA[Cyberstorm.mu]]></category><category><![CDATA[Cryptography]]></category><category><![CDATA[Events]]></category><dc:creator><![CDATA[Keelan Cannoo]]></dc:creator><pubDate>Sun, 08 Sep 2024 10:09:47 GMT</pubDate><media:content url="https://keelancannoo.com/content/images/2024/09/26e05a3b-5e32-436f-b46c-037b43da6832--2-.jpg" medium="image"/><content:encoded><![CDATA[<img src="https://keelancannoo.com/content/images/2024/09/26e05a3b-5e32-436f-b46c-037b43da6832--2-.jpg" alt="Peter Schwabe at a Cyberstorm.mu Social Event"><p>Recently, <a href="https://cyberstorm.mu/?ref=keelancannoo.com" rel="noreferrer">Cyberstorm.mu</a> hosted a social event, featuring none other than <a href="https://cryptojedi.org/peter/index.shtml?ref=keelancannoo.com" rel="noreferrer">Peter Schwabe</a>, Scientific Director at the Max Planck Institute for Security and Privacy (MPI-SP) and part-time professor at Radboud University. He was joined by his partner <a href="https://www.veelasha.org/index.shtml?ref=keelancannoo.com" rel="noreferrer">Veelasha Moonsamy</a> who is also active in research. 
Peter is a co-author of the post-quantum cryptographic scheme <a href="https://pq-crystals.org/kyber/?ref=keelancannoo.com" rel="noreferrer">CRYSTALS-Kyber</a>, which has been selected for standardization by NIST, so his visit was a unique opportunity for our local community to engage with a renowned expert in the field.</p><h2 id="a-relaxed-evening-with-a-cryptography-expert"><strong>A Relaxed Evening with a Cryptography Expert</strong></h2><p>The event took place in the welcoming setting of Happy Rajah in Grand Baie, providing an ideal atmosphere for engaging discussions. Over a delightful dinner, Peter Schwabe shared his experiences and insights into the world of post-quantum cryptography.</p><h3 id="discussions-on-cryptography"><strong>Discussions on Cryptography</strong></h3><p>In our conversations, Peter highlighted the importance of using specialized languages like <a href="https://github.com/jasmin-lang/jasmin?ref=keelancannoo.com" rel="noreferrer">Jasmin</a> for cryptographic implementations. 
Unlike more popular languages such as C, Go or Rust, Jasmin provides a higher level of assurance in security through formal verification, which is essential for preventing vulnerabilities in cryptographic code.</p><p>We also touched on the <a href="https://formosa-crypto.org/?ref=keelancannoo.com" rel="noreferrer">Formosa</a> project, which focuses on creating high-assurance cryptographic software.</p><p>For more detailed insights on these topics, you can refer to Peter Schwabe&apos;s presentation slides from the 2023 CHES conference, available <a href="https://ches.iacr.org/2023/slides/ches-20230911.pdf?ref=keelancannoo.com" rel="noopener">here</a>.</p><h3 id="addressing-skepticism-about-post-quantum-cryptography"><strong>Addressing Skepticism about Post-Quantum Cryptography</strong></h3><p>In response to skeptics who question the need for post-quantum algorithms on the grounds that quantum computers may never become a reality, Peter made it clear that these concerns are beside the point: standardization bodies are already advancing post-quantum algorithms regardless. The momentum in this field is undeniable and is driven by concrete actions.</p><p>Moreover, IBM has met all the milestones outlined in its <a href="https://www.ibm.com/roadmaps/quantum.pdf?ref=keelancannoo.com" rel="noreferrer">quantum roadmap</a> thus far, underscoring the tangible progress in the field. The urgency and necessity of developing robust post-quantum cryptographic solutions are clear and these advancements are happening now.</p><h3 id="pioneering-future-research-in-mauritius"><strong>Pioneering Future Research in Mauritius</strong></h3><p>Cyberstorm.mu is committed to fostering collaboration and creating opportunities within the local community. As part of this mission, we invited Anwar Chutoo, a lecturer from the University of Mauritius, to join our event. 
This initiative aims to support local educational institutions by enhancing their engagement with international cryptography experts, cultivating local talent and strengthening Mauritius&apos;s role in post-quantum cryptography research.</p><h3 id="future-directions"><strong>Future Directions</strong></h3><p>Following our discussions, Peter and Veelasha proposed several initiatives to advance the field in Mauritius. Additionally, plans are underway for Peter to give a talk at the University of Mauritius in the coming weeks, further strengthening the connection between the local academic community and global cryptography research.</p><h3 id="addressing-broader-challenges"><strong>Addressing Broader Challenges</strong></h3><p>The event also included a discussion with Veelasha about the gender gap and challenges faced by researchers in computer science. These important topics highlighted the need for greater inclusivity and support for diverse talent in the field.</p><h3 id="looking-forward"><strong>Looking Forward</strong></h3><p>The conversation with Peter Schwabe has left us with much to think about as we navigate the evolving landscape of cryptographic security. It was a reminder of the importance of staying informed and engaged with the latest advancements in the field.</p><p>Kudos to Cyberstorm.mu for organizing this insightful and engaging evening. We look forward to more opportunities to connect with leading experts and continue our journey into the world of technology and security.</p>]]></content:encoded></item><item><title><![CDATA[Implementing End-to-End Encrypted Backups with Rclone]]></title><description><![CDATA[Learn how to set up a robust, end-to-end encrypted backup strategy for your data using Rclone. 
This tutorial ensures your valuable information remains secure and your hard work preserved by leveraging the power of cloud storage and encryption.]]></description><link>https://keelancannoo.com/implementing-end-to-end-encrypted-backups-with-rclone/</link><guid isPermaLink="false">66a54aadb6e9f319516633ce</guid><dc:creator><![CDATA[Keelan Cannoo]]></dc:creator><pubDate>Sat, 27 Jul 2024 19:31:19 GMT</pubDate><media:content url="https://keelancannoo.com/content/images/2024/08/rclone-1.png" medium="image"/><content:encoded><![CDATA[<h3 id="introduction">Introduction</h3><img src="https://keelancannoo.com/content/images/2024/08/rclone-1.png" alt="Implementing End-to-End Encrypted Backups with Rclone"><p>We all know how important it is to protect our data, whether it&apos;s for personal projects, business documents or, in this example, your blog. Imagine losing all that precious content due to a server failure, hacking attempt or an accidental deletion. Yikes! That&apos;s why having a solid backup strategy is a must. In this guide, I&#x2019;ll show you how to set up end-to-end encrypted backups using rclone. Let&#x2019;s dive in and keep your data safe and sound!</p><h3 id="objectives">Objectives</h3><p>By the end of this guide, you&#x2019;ll be able to:</p><ul><li>Understand the importance of backing up your data.</li><li>Install and configure Rclone for encrypted backups.</li><li>Enable API access and create OAuth credentials for your cloud storage provider (Google Drive).</li><li>Automate the backup process with a script and cron jobs.</li><li>Implement a retention policy to manage storage effectively.</li><li>Verify and monitor your backups regularly.</li></ul><h3 id="1-choose-your-backup-solution">1. Choose Your Backup Solution</h3><p>First things first, let&#x2019;s pick a reliable backup tool. Rclone is fantastic because it supports various cloud storage providers like Google Drive, Dropbox, OneDrive and more. 
Plus, it offers top-notch encryption to keep your data secure whether it&#x2019;s on the move or sitting in the cloud.</p><h3 id="2-installing-rclone">2. Installing Rclone</h3><p>Time to get Rclone up and running. Download it from the <a href="https://rclone.org/?ref=keelancannoo.com">official website</a> or use package managers like <code>apt</code> for Linux or <code>brew</code> for macOS.</p><pre><code class="language-bash">sudo -v ; curl https://rclone.org/install.sh | sudo bash</code></pre><p>Verify the installation to make sure it&#x2019;s all set:</p><pre><code class="language-bash">rclone --version
</code></pre><h3 id="3-enable-api-access">3. Enable API Access</h3><p>Next, we need to enable API access for your cloud storage provider. We&#x2019;ll use Google Drive as an example but feel free to adapt this for your preferred service.</p><h4 id="enable-google-drive-api">Enable Google Drive API</h4><ol><li>Head over to the <a href="https://console.cloud.google.com/?ref=keelancannoo.com">Google Cloud Console</a>.</li><li>Create a new project or select an existing one.</li><li>Navigate to <strong>APIs &amp; Services &gt; Library</strong>.</li><li>Search for &quot;Google Drive API&quot; and enable it.</li></ol><h3 id="4-create-oauth-credentials">4. Create OAuth Credentials</h3><p>To securely access Google Drive (or another service), you need to <a href="https://rclone.org/drive/?ref=keelancannoo.com#making-your-own-client-id" rel="noreferrer">create OAuth credentials</a>. Don&#x2019;t worry, it&#x2019;s not as tricky as it sounds!</p><p>OAuth 2.0 is the de facto industry standard for online authorization, enabling applications like social media integrations (e.g., allowing a third-party app to post on your behalf on Facebook) without sharing your login credentials. This ensures secure access to your data while maintaining privacy and control over how it&apos;s used. 
While you can use the default Rclone credentials, they are shared with all other Rclone users and come with rate limits.</p><h4 id="configure-oauth-consent-screen">Configure OAuth Consent Screen</h4><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/08/image.png" class="kg-image" alt="Implementing End-to-End Encrypted Backups with Rclone" loading="lazy" width="1849" height="566" srcset="https://keelancannoo.com/content/images/size/w600/2024/08/image.png 600w, https://keelancannoo.com/content/images/size/w1000/2024/08/image.png 1000w, https://keelancannoo.com/content/images/size/w1600/2024/08/image.png 1600w, https://keelancannoo.com/content/images/2024/08/image.png 1849w" sizes="(min-width: 720px) 720px"></figure><ol><li>Navigate to the Google Cloud Console and click on &quot;Manage&quot; for the Google Drive API.</li><li>Go to <strong>APIs &amp; Services</strong> &gt; <strong>OAuth consent screen</strong>.</li><li>Choose <strong>External</strong> or <strong>Internal</strong> based on your needs and click <strong>Create</strong>. 
In this case, select External.</li><li>Fill out the <strong>App Information</strong> (e.g., app name, user support email).</li><li>Under <strong>Scopes</strong>, add the following scopes:</li></ol><ul><ul><li><code>https://www.googleapis.com/auth/docs</code></li><li><code>https://www.googleapis.com/auth/drive</code></li><li><code>https://www.googleapis.com/auth/drive.metadata.readonly</code></li></ul></ul><ol start="6"><li>Save and continue through the remaining setup steps.</li><li>Add your own account to the test users and publish your app.</li></ol><h4 id="generating-client-id-and-secret">Generating Client ID and Secret</h4><ol><li>Go to the <a href="https://console.cloud.google.com/?ref=keelancannoo.com">Google Cloud Console</a>.</li><li>Navigate to <strong>APIs &amp; Services &gt; Credentials &gt; Create Credentials &gt; OAuth client ID</strong>.</li><li>Select &quot;Desktop app&quot; as the application type.</li><li>Note down the generated Client ID and Client Secret for Rclone configuration. You can also download the client secret JSON file.</li></ol><h3 id="5-configure-rclone">5. Configure Rclone</h3><p>Now that we&#x2019;ve got the OAuth credentials, let&#x2019;s configure Rclone to use them.</p><p>Note that OAuth requires a web browser for authorization. If you are using a headless machine, there are <a href="https://rclone.org/remote_setup/?ref=keelancannoo.com" rel="noreferrer">several ways</a> of going about this. 
You can use an SSH tunnel to forward port 53682 on the headless box to your local machine, enabling browser access.</p><pre><code class="language-bash">ssh -L localhost:53682:localhost:53682 username@remote_server</code></pre><p>Fire up your terminal and start the Rclone configuration:</p><pre><code class="language-bash">rclone config</code></pre><h4 id="configuration">Configuration</h4><p>Follow the prompts:</p><ol><li>Type <code>n</code> to create a new remote.</li><li>Name the remote (e.g., <code>gdrive</code>).</li><li>Select the cloud storage provider (e.g., <code>17</code> for Google Drive).</li><li>Enter the <code>client_id</code> and <code>client_secret</code> from your downloaded JSON file.</li><li>Leave the rest of the fields as default unless you have specific requirements.</li><li>Open the provided URL in your browser to authorize Rclone and get the token.</li></ol><h3 id="6-set-up-encryption">6. Set Up Encryption</h3><p>Encrypting your backups ensures that even if someone gains access to your cloud storage, they can&#x2019;t read your data without the encryption key. Let&#x2019;s make sure your data stays safe.</p><p><strong>Start Rclone configuration</strong>:</p><pre><code class="language-bash">rclone config</code></pre><h4 id="configure-encrypted-remote">Configure Encrypted Remote</h4><p>Follow the prompts to add encryption:</p><ol><li>Type <code>n</code> to create a new remote.</li><li>Name the remote (e.g., <code>encrypted-gdrive</code>).</li><li>Select <code>crypt</code> for the storage type.</li><li>Choose the previously configured remote as the remote to encrypt and set the path where the encrypted data will be stored (e.g., <code>gdrive:enc_backup</code>). Ensure the path exists on Google Drive (create it with <code>rclone mkdir gdrive:enc_backup</code> if needed).</li><li>Choose a password and a salt for encryption. Make sure to store these securely.</li></ol><h3 id="7-writing-your-backup-script">7. 
Writing Your Backup Script</h3><p>Creating a backup script is essential to automate and manage your backup process. Here&apos;s an example of a shell script (<code>backup.sh</code>) that not only performs the backup but also implements a retention policy to manage storage effectively by retaining a specific number of recent backups and deleting older ones.</p><pre><code class="language-bash">#!/bin/bash
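
# Fail the whole mysqldump pipeline below if any stage fails; without this,
# a pipeline&apos;s exit status reflects only its last command (gzip), which
# would mask a failed database dump
set -o pipefail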

# Define variables
DB_NAME=&quot;blog_prod&quot;
GHOST_CONTENT_DIR=&quot;/var/www/blog/content&quot;
BACKUP_DIR=&quot;/home/keelan/backup&quot;
REMOTE_NAME=&quot;encrypted-gdrive&quot;
REMOTE_PATH=&quot;data-backups&quot;
TIMESTAMP=$(date +&quot;%Y%m%d_%H%M%S&quot;)
MYSQL_DUMP=&quot;$BACKUP_DIR/ghost_prod-$TIMESTAMP.sql.gz&quot;
TARBALL=&quot;$BACKUP_DIR/content-$TIMESTAMP.tar.gz&quot;
COMBINED_TARBALL=&quot;$BACKUP_DIR/backup-$TIMESTAMP.tar.gz&quot;
LOG_DIR=&quot;/home/keelan/log&quot;
LOG_FILE=&quot;$LOG_DIR/backup_script.log&quot;
MAX_BACKUPS=20

# Function to log messages
log_message() {
    echo &quot;$(date +&quot;%Y-%m-%d %H:%M:%S&quot;) : $1&quot; &gt;&gt; &quot;$LOG_FILE&quot;
}

# Function to run a command and log it
run_command() {
    local cmd=&quot;$1&quot;
    log_message &quot;$cmd&quot;
    eval &quot;$cmd&quot;
    local status=$?
    if [ $status -ne 0 ]; then
        log_message &quot;Error: Command failed with status $status&quot;
        exit $status
    fi
}

# Create backup directory if it doesn&apos;t exist
mkdir -p &quot;$BACKUP_DIR&quot;

# Create log directory if it doesn&apos;t exist
mkdir -p &quot;$LOG_DIR&quot;

# Dump MySQL database and compress it
run_command &quot;mysqldump $DB_NAME --no-tablespaces | gzip &gt; $MYSQL_DUMP&quot;
log_message &quot;Database backup created and compressed successfully.&quot;

# Create tarball of Ghost content directory
run_command &quot;tar -zcvf $TARBALL -C $GHOST_CONTENT_DIR .&quot;
log_message &quot;Ghost content tarball created successfully.&quot;

# Combine MySQL dump and Ghost content into one tarball
run_command &quot;tar -zcvf $COMBINED_TARBALL -C $BACKUP_DIR $(basename $TARBALL) $(basename $MYSQL_DUMP)&quot;
log_message &quot;Combined tarball created successfully.&quot;

# Copy tarball to Google Drive
run_command &quot;rclone copy $COMBINED_TARBALL $REMOTE_NAME:$REMOTE_PATH&quot;
log_message &quot;Backup successfully copied to Google Drive.&quot;
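
# Optional: verify the uploaded copy against the local tarball before it is
# deleted locally. rclone&apos;s cryptcheck compares plain files with their
# encrypted counterparts through the crypt remote.
run_command &quot;rclone cryptcheck $BACKUP_DIR $REMOTE_NAME:$REMOTE_PATH --include $(basename $COMBINED_TARBALL)&quot;
log_message &quot;Remote backup verified with cryptcheck.&quot;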

# Clean up local files
run_command &quot;rm -f $MYSQL_DUMP $TARBALL $COMBINED_TARBALL&quot;

# Clean up old backups
# Sort by filename (which embeds the timestamp) so the newest backups come first
backup_list=$(rclone ls &quot;$REMOTE_NAME&quot;:&quot;$REMOTE_PATH&quot; | awk &apos;{print $2}&apos; | sort -r)
backup_count=$(echo &quot;$backup_list&quot; | wc -l)

if [ &quot;$backup_count&quot; -gt &quot;$MAX_BACKUPS&quot; ]; then
  echo &quot;$backup_list&quot; | tail -n +$(($MAX_BACKUPS + 1)) | while read -r backup; do
    run_command &quot;rclone deletefile $REMOTE_NAME:$REMOTE_PATH/$backup&quot;
    log_message &quot;Deleted old backup: $backup&quot;
  done
fi

# Log backup activity
log_message &quot;Backup process completed successfully.&quot;

echo &quot;Backup process completed successfully.&quot;
</code></pre><p>Make the script executable:</p><pre><code class="language-bash">sudo chmod +x backup.sh
</code></pre><p>Creating a dedicated MySQL user for backups ensures that your database access is secure and limited to only the permissions necessary for performing backups.</p><pre><code class="language-sql">mysql&gt; CREATE USER &apos;backup&apos;@&apos;localhost&apos; IDENTIFIED BY &apos;backuppassword&apos;;
mysql&gt; GRANT SELECT, LOCK TABLES, SHOW VIEW, EVENT, TRIGGER ON blog_prod.* TO &apos;backup&apos;@&apos;localhost&apos;;
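mysql&gt; -- Sanity check: list the privileges the backup account now holds
mysql&gt; SHOW GRANTS FOR &apos;backup&apos;@&apos;localhost&apos;;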
mysql&gt; FLUSH PRIVILEGES;</code></pre><p>Creating a <code>~/.my.cnf</code> file simplifies the backup process by storing the MySQL credentials securely. This file is read automatically by MySQL client tools, so you don&#x2019;t need to enter the username and password every time you run a backup. Restrict its permissions with <code>chmod 600 ~/.my.cnf</code> so only your user can read it.</p><pre><code class="language-ini">[client]
user=backup
password=&quot;backuppassword&quot;</code></pre><p>Test the backup process:</p><pre><code class="language-bash">./backup.sh</code></pre><p>Sure enough, it gets uploaded to Google Drive and is encrypted.</p><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/07/image-15.png" class="kg-image" alt="Implementing End-to-End Encrypted Backups with Rclone" loading="lazy" width="1588" height="399" srcset="https://keelancannoo.com/content/images/size/w600/2024/07/image-15.png 600w, https://keelancannoo.com/content/images/size/w1000/2024/07/image-15.png 1000w, https://keelancannoo.com/content/images/2024/07/image-15.png 1588w" sizes="(min-width: 720px) 720px"></figure><h3 id="8-scheduling-automated-backups">8. Scheduling Automated Backups</h3><p>Schedule the backup script to run automatically at regular intervals (e.g., daily or weekly) using cron jobs on Linux:</p><pre><code class="language-bash">crontab -e
</code></pre><p>Add the following line to schedule the script to run every Sunday at 1 AM and log the output:</p><pre><code class="language-bash">0 1 * * 0 /home/keelan/backup.sh &gt;&gt; /home/keelan/log/cron.log 2&gt;&amp;1
</code></pre><h3 id="9-testing-and-monitoring-your-backups">9. Testing and Monitoring Your Backups</h3><h4 id="verification">Verification</h4><p>Periodically test your backup strategy by restoring backups to ensure they are complete and functional.</p><h4 id="monitoring">Monitoring</h4><p>Monitor backup logs for any errors or warnings related to backup operations.</p><h3 id="conclusion">Conclusion</h3><p>Your data is more than just files&#x2014;it&#x2019;s your hard work, creativity, and possibly your livelihood. Protect it with a reliable backup strategy using Rclone and your chosen cloud storage provider. By following this guide, you&apos;ve equipped yourself with the tools and knowledge to safeguard your data against unexpected loss. Take charge of your data protection today and ensure continuity in your work and personal projects. With backups in place, you can move forward with confidence, knowing that your content is securely backed up and ready for whatever the future holds.</p>]]></content:encoded></item><item><title><![CDATA[Understanding QUIC and HTTP/3]]></title><description><![CDATA[QUIC and HTTP/3 are setting new standards for web speed and security. Discover how these cutting-edge protocols are transforming your online experience.]]></description><link>https://keelancannoo.com/understanding-quic-and-http-3/</link><guid isPermaLink="false">66901fa0a267760b8939228d</guid><dc:creator><![CDATA[Keelan Cannoo]]></dc:creator><pubDate>Fri, 12 Jul 2024 20:47:12 GMT</pubDate><media:content url="https://keelancannoo.com/content/images/2024/08/quic.png" medium="image"/><content:encoded><![CDATA[<h3 id="the-evolution-of-internet-protocols-quic-and-http3">The Evolution of Internet Protocols: QUIC and HTTP/3</h3><img src="https://keelancannoo.com/content/images/2024/08/quic.png" alt="Understanding QUIC and HTTP/3"><p>In the ever-evolving world of the internet, new technologies like QUIC and HTTP/3 are reshaping how we browse and interact online. 
These protocols represent a significant leap forward in enhancing web performance, security and efficiency.</p><h3 id="understanding-quic">Understanding QUIC</h3><p>QUIC, developed initially by Google and now standardized as IETF QUIC, stands out for its innovative approach to web connections. Unlike traditional TCP which requires a separate TLS handshake to establish encryption, QUIC operates over UDP and includes encryption by default. This design not only speeds up connection establishment but also ensures secure data transmission.</p><h3 id="http3-revolutionizing-web-communications">HTTP/3: Revolutionizing Web Communications</h3><p>HTTP/3 builds upon the foundation of QUIC to improve the Hypertext Transfer Protocol (HTTP). By replacing TCP with QUIC&#x2019;s advanced capabilities, HTTP/3 overcomes traditional limitations. It eliminates head-of-line blocking, a bottleneck where delays in one packet can stall others, thereby accelerating website load times. Additionally, HTTP/3 excels on mobile networks where it efficiently handles latency and packet loss issues, ensuring a seamless user experience. Moreover, QUIC has the ability to persist a connection across network changes.</p><h3 id="cyberstormmu%E2%80%99s-role-in-advancing-quic">Cyberstorm.mu&#x2019;s Role in Advancing QUIC</h3><p><a href="https://cyberstorm.mu/?ref=keelancannoo.com" rel="noreferrer">Cyberstorm.mu</a>, a dedicated community at the forefront of internet technology, has been contributing to the development of QUIC since 2019. Through active participation in IETF hackathons, Cyberstorm has refined QUIC implementations.</p><h4 id="academic-contributions">Academic Contributions</h4><p>In addition to hackathons, Cyberstorm members have pursued groundbreaking research in QUIC. Jeremie Daniel&apos;s 2021 thesis focused on optimizing congestion control within QUIC crucial for maintaining efficient data transmission under heavy network traffic. 
Keelan Cannoo&#x2019;s 2023 thesis explored the integration of post-quantum cryptography into QUIC, addressing future security challenges posed by quantum computing.</p><h3 id="embracing-the-future-of-web-protocols">Embracing the Future of Web Protocols</h3><p>As major platforms and services embrace HTTP/3, the impact of QUIC and HTTP/3 on the internet landscape becomes increasingly profound. From Google and Facebook to Cloudflare and Uber, adoption of these protocols underscores their transformative potential in delivering faster, more secure and reliable web experiences for users worldwide.</p>]]></content:encoded></item><item><title><![CDATA[From Ground Zero to Go-Live: Guide to Setting Up Ghost CMS on Linux]]></title><description><![CDATA[Learn to set up Ghost CMS on Linux from scratch. Follow step-by-step instructions, including Cloudflare integration for security and performance. Get hosting, domain, CMS setup, firewall configuration and launch your website. Let us guide you through this exhilarating journey into website creation!]]></description><link>https://keelancannoo.com/from-ground-zero-to-go-live-guide-to-setting-up-ghost-cms-on-linux/</link><guid isPermaLink="false">6636809439543f0388516690</guid><dc:creator><![CDATA[Keelan Cannoo]]></dc:creator><pubDate>Mon, 27 May 2024 18:40:55 GMT</pubDate><media:content url="https://keelancannoo.com/content/images/2024/06/_273ef90a-a4b9-4adb-9074-61149b603f87.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://keelancannoo.com/content/images/2024/06/_273ef90a-a4b9-4adb-9074-61149b603f87.jpeg" alt="From Ground Zero to Go-Live: Guide to Setting Up Ghost CMS on Linux"><p>Setting up Ghost CMS on a Linux server from scratch can be an exhilarating journey into website creation. This guide will walk you through the process step-by-step, including integrating Cloudflare for enhanced security and performance. 
We&apos;ll cover everything from getting hosting and a domain to configuring firewalls and securing your admin dashboard. Let&apos;s dive in!</p><h3 id="objectives">Objectives</h3><p>By the end of this guide, you will:</p><ul><li>Learn how to set up a Ghost CMS on a Linux server.</li><li>Set up your domain with the appropriate DNS configurations.</li><li>Integrate Cloudflare for improved security and performance.</li><li>Configure essential security measures, including disabling root SSH, setting up nftables, ipset and a droplet firewall.</li><li>Add swap memory to your server.</li><li>Backup your Ghost CMS installation regularly.</li><li>Secure the admin dashboard with Cloudflare Zero Trust.</li><li>Monitor your website&apos;s uptime with external tools like UptimeRobot.</li></ul><h3 id="prerequisites">Prerequisites</h3><p>Before proceeding, ensure you have the following:</p><ul><li>A Linux server (preferably Ubuntu 20.04 or later).</li><li>Basic knowledge of using the command line.</li><li>Basic SSH knowledge</li></ul><h3 id="step-1-setting-up-your-hosting-and-domain">Step 1: Setting Up Your Hosting and Domain</h3><ol><li><strong>Choose a Hosting Provider</strong>: Select a hosting provider that supports Linux servers. Popular options include DigitalOcean, AWS and Linode. For this guide, we&apos;ll use DigitalOcean.</li><li><strong>Create a Droplet</strong>: Sign in to your hosting provider and create a new droplet (virtual server) with Ubuntu 22.04. After creation, the droplet will have a public IP address.<br><br>In this case, we will create a droplet with 25GB SSD, 1GB RAM and 1 CPU Core. Choose SSH as the authentication method instead of a password.</li></ol><h4 id="registering-a-domain">Registering a Domain</h4><ol start="3"><li><strong>Domain Registration</strong>: If you don&apos;t have a domain name yet, register one with <a href="https://dash.cloudflare.com/?ref=keelancannoo.com" rel="noreferrer">Cloudflare</a>.  
Other services like Namecheap, GoDaddy and Google Domains are popular choices. </li></ol><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/07/image-9.png" class="kg-image" alt="From Ground Zero to Go-Live: Guide to Setting Up Ghost CMS on Linux" loading="lazy" width="1810" height="904" srcset="https://keelancannoo.com/content/images/size/w600/2024/07/image-9.png 600w, https://keelancannoo.com/content/images/size/w1000/2024/07/image-9.png 1000w, https://keelancannoo.com/content/images/size/w1600/2024/07/image-9.png 1600w, https://keelancannoo.com/content/images/2024/07/image-9.png 1810w" sizes="(min-width: 720px) 720px"></figure><h4 id="configuring-dns-settings-on-cloudflare">Configuring DNS Settings on Cloudflare</h4><p>Once registered, point your domain to your server&apos;s IP address. This step is crucial for connecting your domain to your server and making your site accessible online.</p><ol start="4"><li>Configure to use DNSSEC by going at Domain Registrations &gt; Manage Domains &gt; <code>your domain</code> &gt; Configuration</li></ol><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/07/image-13.png" class="kg-image" alt="From Ground Zero to Go-Live: Guide to Setting Up Ghost CMS on Linux" loading="lazy" width="1217" height="577" srcset="https://keelancannoo.com/content/images/size/w600/2024/07/image-13.png 600w, https://keelancannoo.com/content/images/size/w1000/2024/07/image-13.png 1000w, https://keelancannoo.com/content/images/2024/07/image-13.png 1217w" sizes="(min-width: 720px) 720px"></figure><ol start="5"><li>Configure DNS records. Head to websites&gt;<code>your domain</code> &gt;DNS Records. 
Add A and CNAME records as shown below:</li></ol><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/07/image-10.png" class="kg-image" alt="From Ground Zero to Go-Live: Guide to Setting Up Ghost CMS on Linux" loading="lazy" width="1836" height="964" srcset="https://keelancannoo.com/content/images/size/w600/2024/07/image-10.png 600w, https://keelancannoo.com/content/images/size/w1000/2024/07/image-10.png 1000w, https://keelancannoo.com/content/images/size/w1600/2024/07/image-10.png 1600w, https://keelancannoo.com/content/images/2024/07/image-10.png 1836w" sizes="(min-width: 720px) 720px"></figure><h3 id="step-2-initial-server-configuration">Step 2: Initial Server Configuration</h3><p>After setting up your hosting and domain, it&apos;s time to configure your server for secure operation.</p><ol><li><strong>Access Your Server</strong>: Use SSH to connect to your server.</li></ol><pre><code class="language-bash">ssh root@your_server_ip</code></pre><ol start="2"><li><strong>Create a New User</strong>: For security, create a new user with sudo privileges.</li></ol><pre><code class="language-bash">adduser keelan
usermod -aG sudo keelan</code></pre><ol start="3"><li><strong>Rsync SSH Configuration</strong>: Set up SSH keys and permissions for the new user.</li></ol><pre><code class="language-bash">rsync --archive --chown=keelan:keelan ~/.ssh /home/keelan</code></pre><ol start="4"><li><strong>SSH Hardening:</strong></li></ol><ul><ul><li>Open the SSH configuration file.</li></ul></ul><pre><code class="language-bash">nano /etc/ssh/sshd_config</code></pre><ul><ul><li>To disable root SSH login, find and change <code>PermitRootLogin</code> to <code>no</code>.</li><li>To disable Password Authentication, ensure <code>PasswordAuthentication</code> is set to <code>no</code>.</li><li>Change SSH Port to something else. For example <code>8123</code>.</li></ul></ul><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/07/2024-07-22_19-14.png" class="kg-image" alt="From Ground Zero to Go-Live: Guide to Setting Up Ghost CMS on Linux" loading="lazy" width="797" height="891" srcset="https://keelancannoo.com/content/images/size/w600/2024/07/2024-07-22_19-14.png 600w, https://keelancannoo.com/content/images/2024/07/2024-07-22_19-14.png 797w" sizes="(min-width: 720px) 720px"></figure><ol start="5"><li>Restart the SSH service.</li></ol><pre><code class="language-bash">sudo systemctl daemon-reload
sudo systemctl restart ssh.socket
sudo systemctl status ssh.socket ssh.service</code></pre><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text">SSHd&#xA0;now uses&#xA0;socket-based activation (Ubuntu 22.10 and later).</div></div><ol start="6"><li>SSH in using the newly created user.</li></ol><pre><code class="language-bash">ssh keelan@your_server_ip -p 8123</code></pre><h3 id="step-3-add-swap-memory">Step 3: Add Swap Memory</h3><p>Adding swap memory can help improve your server&apos;s performance by providing additional memory space, which is particularly useful for preventing Out Of Memory (OOM) issues, especially on servers with low RAM.</p><ol><li><strong>Create a Swap File</strong>:</li></ol><pre><code class="language-bash">sudo fallocate -l 2G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile</code></pre><ol start="2"><li><strong>Make the Swap File Permanent</strong>:</li></ol><p>Add the swap file to <code>/etc/fstab</code>.</p><pre><code class="language-bash">echo &apos;/swapfile none swap sw 0 0&apos; | sudo tee -a /etc/fstab</code></pre><h3 id="step-4-install-mysql-ghost-cli-and-nodejs">Step 4: Install mysql, Ghost CLI and Node.js</h3><p>Following the <a href="https://ghost.org/docs/install/ubuntu/?ref=keelancannoo.com" rel="noreferrer">official ghost installation guide</a>,</p><ol><li>Update packages</li></ol><pre><code class="language-bash">sudo apt-get update
sudo apt-get upgrade</code></pre><ol start="2"><li>Install nginx</li></ol><pre><code class="language-bash">sudo apt-get install nginx
</code></pre><ol start="3"><li>Install and configure mysql</li></ol><pre><code class="language-bash">sudo apt-get install mysql-server
# Enter mysql
sudo mysql
# Update permissions
ALTER USER &apos;root&apos;@&apos;localhost&apos; IDENTIFIED WITH &apos;mysql_native_password&apos; BY &apos;&lt;your-new-root-password&gt;&apos;;
# Reread permissions
FLUSH PRIVILEGES;
# exit mysql
exit</code></pre><ol start="4"><li>Install Node.js</li></ol><pre><code class="language-bash">
# Download and import the Nodesource GPG key
sudo apt-get update
sudo apt-get install -y ca-certificates curl gnupg
sudo mkdir -p /etc/apt/keyrings
curl -fsSL https://deb.nodesource.com/gpgkey/nodesource-repo.gpg.key | sudo gpg --dearmor -o /etc/apt/keyrings/nodesource.gpg

# Create deb repository
NODE_MAJOR=18 # Use a supported version
echo &quot;deb [signed-by=/etc/apt/keyrings/nodesource.gpg] https://deb.nodesource.com/node_$NODE_MAJOR.x nodistro main&quot; | sudo tee /etc/apt/sources.list.d/nodesource.list

# Run update and install
sudo apt-get update
sudo apt-get install nodejs -y</code></pre><ol start="5"><li> Install Ghost CLI</li></ol><pre><code class="language-bash">sudo npm install -g ghost-cli</code></pre><h3 id="step-5-secure-with-cloudflare-ssltls">Step 5: Secure with Cloudflare SSL/TLS</h3><p>In the Overview tab, Set SSL/TLS Encryption Mode to <code>Full (Strict)</code>.</p><h4 id="configure-edge-certificates-on-cloudflare">Configure Edge Certificates on Cloudflare</h4><ol start="7"><ul><li>Go to the <code>Edge Certificates</code> tab in the <code>SSL/TLS</code> section.</li><li>Ensure <code>Always Use HTTPS</code> is enabled to redirect all HTTP traffic to HTTPS.</li><li>Set <code>Minimum TLS Version</code> to <code>TLS 1.2</code>.</li></ul></ol><h4 id="generate-private-key-and-certificate-with-cloudflare">Generate Private Key and Certificate with Cloudflare</h4><ol><li>Go to the <code>Origin Server</code> tab in the <code>SSL/TLS</code> section.</li><li>Click <code>create certificate</code> to generate private key and certificate with Cloudflare.</li></ol><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/07/2024-07-23_22-47.png" class="kg-image" alt="From Ground Zero to Go-Live: Guide to Setting Up Ghost CMS on Linux" loading="lazy" width="1659" height="862" srcset="https://keelancannoo.com/content/images/size/w600/2024/07/2024-07-23_22-47.png 600w, https://keelancannoo.com/content/images/size/w1000/2024/07/2024-07-23_22-47.png 1000w, https://keelancannoo.com/content/images/size/w1600/2024/07/2024-07-23_22-47.png 1600w, https://keelancannoo.com/content/images/2024/07/2024-07-23_22-47.png 1659w" sizes="(min-width: 720px) 720px"></figure><ol start="3"><li>Paste Certificate and Private Key on the Origin Server. </li></ol><pre><code class="language-bash">sudo mkdir -p /etc/ssl/cloudflare
sudo nano /etc/ssl/cloudflare/your_domain.crt
sudo nano /etc/ssl/cloudflare/your_domain.key</code></pre><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/07/2024-07-23_22-48.png" class="kg-image" alt="From Ground Zero to Go-Live: Guide to Setting Up Ghost CMS on Linux" loading="lazy" width="1496" height="923" srcset="https://keelancannoo.com/content/images/size/w600/2024/07/2024-07-23_22-48.png 600w, https://keelancannoo.com/content/images/size/w1000/2024/07/2024-07-23_22-48.png 1000w, https://keelancannoo.com/content/images/2024/07/2024-07-23_22-48.png 1496w" sizes="(min-width: 720px) 720px"></figure><h3 id="step-6-set-up-ghost-cms"><strong>Step</strong> 6: Set Up Ghost CMS</h3><p>With the necessary tools installed, you can now set up Ghost CMS.</p><p>1. <strong>Create a Directory for Ghost</strong></p><pre><code class="language-bash">sudo mkdir -p /var/www/sitename
sudo chown newuser:newuser /var/www/sitename
sudo chmod 775 /var/www/sitename</code></pre><p>2. <strong>Install Ghost</strong></p><pre><code class="language-bash">cd /var/www/sitename
ghost install</code></pre><p>Follow the installation prompts to set up Nginx and SSL. The Ghost CLI will create initial Nginx and SSL configurations which we will adjust manually in the next step to use the SSL certificates from Cloudflare.</p><h3 id="step-7-configure-nginx">Step 7: Configure Nginx</h3><p>Nginx will serve as the reverse proxy for your Ghost CMS. It&apos;s important to configure Nginx to pass the original IP address of clients to your application, use the SSL certificates obtained from Cloudflare, and secure your site by disabling direct IP access.</p><h4 id="block-direct-ip-access">Block Direct IP Access</h4><ol><li>Edit the default Nginx configuration to block direct IP access:</li></ol><pre><code class="language-bash">sudo nano /etc/nginx/sites-available/default</code></pre><pre><code class="language-nginx"># Disable direct IP access
server {
        listen 80 default_server;
        listen [::]:80 default_server;

        listen 443 ssl default_server;
        listen [::]:443 ssl default_server;

        server_name _;
        ssl_reject_handshake on;
        return 444;
}</code></pre><h4 id="configure-nginx-for-ghost">Configure Nginx for Ghost</h4><ol><li><strong>Edit Nginx Configuration for Ghost</strong>:</li></ol><p>Replace with your own domain configuration file.</p><pre><code class="language-bash">sudo nano /etc/nginx/sites-available/curiouskit.dev-ssl.conf </code></pre><ol start="2"><li>Add the following configuration:</li></ol><pre><code class="language-nginx">server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;

    server_name www.curiouskit.dev curiouskit.dev;

    # setup our access and error logs
    access_log /var/log/nginx/curiouskit.dev.access.log;
    error_log /var/log/nginx/curiouskit.dev.error.log;

    # SSL certificate and key files
    ssl_certificate /etc/ssl/cloudflare/your_domain.crt;
    ssl_certificate_key /etc/ssl/cloudflare/your_domain.key;

    include /etc/nginx/snippets/ssl-params.conf;
    server_tokens off; # Hide Nginx version number

    # Cloudflare IP ranges to log original client IP
    # https://www.cloudflare.com/ips/
    set_real_ip_from 103.21.244.0/22;
    set_real_ip_from 103.22.200.0/22;
    set_real_ip_from 103.31.4.0/22;
    set_real_ip_from 104.16.0.0/12;
    set_real_ip_from 108.162.192.0/18;
    set_real_ip_from 131.0.72.0/22;
    set_real_ip_from 141.101.64.0/18;
    set_real_ip_from 162.158.0.0/15;
    set_real_ip_from 172.64.0.0/13;
    set_real_ip_from 173.245.48.0/20;
    set_real_ip_from 188.114.96.0/20;
    set_real_ip_from 190.93.240.0/20;
    set_real_ip_from 197.234.240.0/22;
    set_real_ip_from 198.41.128.0/17;
    real_ip_header CF-Connecting-IP;

    location / {
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $http_host;
        proxy_pass http://127.0.0.1:2368;
    }

    client_max_body_size 1g;
}</code></pre><h4 id="ssl-parameters">SSL Parameters</h4><p>Open the SSL parameters configuration file:</p><pre><code class="language-bash">sudo nano /etc/nginx/snippets/ssl-params.conf</code></pre><p>Add the following SSL parameters:</p><pre><code class="language-nginx">ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256;
ssl_prefer_server_ciphers on;
ssl_ecdh_curve secp384r1; # Requires nginx &gt;= 1.1.0
ssl_session_cache shared:SSL:10m;
ssl_session_tickets off; # Requires nginx &gt;= 1.5.9
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;
add_header Strict-Transport-Security &apos;max-age=63072000; includeSubDomains; preload&apos;;
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;
ssl_dhparam /etc/nginx/snippets/dhparam.pem;</code></pre><p>3. <strong>Enable the Configuration</strong>:</p><pre><code class="language-bash"># Test the Nginx configuration for syntax errors
sudo nginx -t
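
# If the test fails because /etc/nginx/snippets/dhparam.pem (referenced in
# the ssl-params snippet above) does not exist yet, generate it and re-run
# the test; 2048-bit generation can take a minute or two:
sudo openssl dhparam -out /etc/nginx/snippets/dhparam.pem 2048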

# Restart Nginx to apply the changes
sudo systemctl restart nginx</code></pre><h3 id="step-8-configure-firewall">Step 8: Configure the Firewall</h3><p>To protect your server from unauthorized access, configure nftables with a default-deny policy: drop all traffic by default and explicitly allow only the traffic your Ghost CMS server needs.</p><p>Create or update the main nftables configuration file <code>/etc/nftables.conf</code>:</p><pre><code class="language-bash">#!/usr/sbin/nft -f

flush ruleset

# Create the base table and chains
table inet my_table {
    # Define Cloudflare IPs set
    include &quot;/etc/nftables.d/cloudflare-ips.conf&quot;

    chain input {
        type filter hook input priority 0; policy drop;

        # Drop invalid packets.
        ct state invalid drop

        # Allow loopback traffic.
        iifname lo accept

        # Allow all ICMP traffic, but enforce a rate limit
        # to help prevent some types of flood attacks.
        ip protocol icmp limit rate 4/second accept

        # Allow SSH on port 8124
        tcp dport 8124 ct state new,established accept
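
        # Optional hardening (commented out): rate-limit new SSH connections
        # to slow brute-force attempts against port 8124, for example:
        # tcp dport 8124 ct state new limit rate 10/minute accept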

        # Allow incoming HTTP/HTTPS responses
        tcp sport {80, 443} ct state established accept

        # Allow incoming DNS responses
        udp sport 53 ct state established accept
        tcp sport 53 ct state established accept

        # Accept HTTPS only from Cloudflare&apos;s published ranges,
        # since the site is served through the Cloudflare proxy
        ip saddr $cloudflare_ips tcp dport 443 ct state new,established accept

        # Log and drop everything else
        log prefix &quot;nftables: INPUT DROP &quot; drop
    }

    chain forward {
        type filter hook forward priority 0; policy drop;
    }

    chain output {
        type filter hook output priority 0; policy drop;

        # Allow all traffic on the localhost interface
        oif lo accept

        # Allow established SSH connections
        tcp sport 8124 ct state established accept

        # Allow outgoing HTTP/HTTPS requests
        tcp dport {80, 443} ct state new,established accept

        # Allow outgoing DNS requests
        udp dport 53 ct state new,established accept
        tcp dport 53 ct state new,established accept

        # Allow outgoing ICMP (ping) traffic
        ip protocol icmp accept

        # Allow replies from Nginx on established inbound HTTPS connections
        tcp sport 443 ct state established accept

        # Log and drop everything else
        log prefix &quot;nftables: OUTPUT DROP &quot; drop
    }
}</code></pre><h4 id="cloudflare-ips-configuration">Cloudflare IPs Configuration</h4><p>Ensure that the Cloudflare IPs configuration file <code>/etc/nftables.d/cloudflare-ips.conf</code> is up-to-date:</p><pre><code class="language-bash">define cloudflare_ips = {
    103.21.244.0/22,
    103.22.200.0/22,
    103.31.4.0/22,
    104.16.0.0/13,
    104.24.0.0/14,
    108.162.192.0/18,
    131.0.72.0/22,
    141.101.64.0/18,
    162.158.0.0/15,
    172.64.0.0/13,
    173.245.48.0/20,
    188.114.96.0/20,
    190.93.240.0/20,
    197.234.240.0/22,
    198.41.128.0/17
}</code></pre><h4 id="script-to-update-cloudflare-ips">Script to Update Cloudflare IPs</h4><pre><code class="language-bash">#!/bin/bash
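
# Fetch Cloudflare&apos;s current IPv4 ranges, rewrite the define block above,
# and reload nftables only when the list has actually changed. Run this
# periodically, e.g. from a daily cron job or systemd timer.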

set -e

# Variables
NFTABLES_CONF=&quot;/etc/nftables.conf&quot;
CLOUDFLARE_CONF=&quot;/etc/nftables.d/cloudflare-ips.conf&quot;
BACKUP_DIR=&quot;/etc/nftables.d/backups&quot;
LOG_FILE=&quot;/home/keelan/log/nftables-update.log&quot;
TIMESTAMP=$(date +&quot;%Y%m%d%H%M%S&quot;)

# Create backup directory and log directory if they don&apos;t exist
mkdir -p $BACKUP_DIR
mkdir -p $(dirname $LOG_FILE)
touch $CLOUDFLARE_CONF

# Function to fetch Cloudflare IP ranges
fetch_cloudflare_ips() {
    # -f makes curl fail on HTTP errors instead of capturing an error page
    # (note: this fetches IPv4 ranges only; Cloudflare also publishes /ips-v6)
    curl -fs https://www.cloudflare.com/ips-v4
}

# Backup existing configuration
cp $NFTABLES_CONF $BACKUP_DIR/nftables.conf.$TIMESTAMP
cp $CLOUDFLARE_CONF $BACKUP_DIR/cloudflare-ips.conf.$TIMESTAMP

# Fetch the latest Cloudflare IP ranges
CLOUDFLARE_IPS=$(fetch_cloudflare_ips)

# Generate the new Cloudflare configuration: one indented range per line,
# comma-separated, wrapped in an nftables define block
NEW_CLOUDFLARE_CONF=$(echo $CLOUDFLARE_IPS | tr &apos; &apos; &apos;\n&apos; | sed &apos;s/^/    /&apos; | sed &apos;$!s/$/,/&apos; | sed &apos;1 i\define cloudflare_ips = {&apos; | sed &apos;$ a\}&apos;)
# Check if the configuration has changed
if ! cmp -s &lt;(echo &quot;$NEW_CLOUDFLARE_CONF&quot;) $CLOUDFLARE_CONF; then
    echo &quot;$(date +&quot;%Y-%m-%d %H:%M:%S&quot;) - Cloudflare IP ranges changed; updating configuration&quot; &gt;&gt; $LOG_FILE
    # Update Cloudflare configuration file
    echo &quot;$NEW_CLOUDFLARE_CONF&quot; &gt; $CLOUDFLARE_CONF

    # Validate the nftables configuration; if the new file fails the syntax
    # check, restore the previous one (with set -e the script would otherwise
    # exit here and leave the invalid file in place)
    if ! nft -c -f $NFTABLES_CONF; then
        cp $BACKUP_DIR/cloudflare-ips.conf.$TIMESTAMP $CLOUDFLARE_CONF
        echo &quot;$(date +&quot;%Y-%m-%d %H:%M:%S&quot;) - New Cloudflare IPs failed validation; restored previous configuration&quot; &gt;&gt; $LOG_FILE
        exit 1
    fi

    # Apply the updated nftables configuration
    if nft -f $NFTABLES_CONF; then
        echo &quot;$(date +&quot;%Y-%m-%d %H:%M:%S&quot;) - Updated Cloudflare IPs and reloaded nftables configuration&quot; &gt;&gt; $LOG_FILE
    else
        echo &quot;$(date +&quot;%Y-%m-%d %H:%M:%S&quot;) - Failed to reload nftables configuration&quot; &gt;&gt; $LOG_FILE
        # Restore the previous configuration in case of failure
        cp $BACKUP_DIR/nftables.conf.$TIMESTAMP $NFTABLES_CONF
        cp $BACKUP_DIR/cloudflare-ips.conf.$TIMESTAMP $CLOUDFLARE_CONF
        systemctl restart nftables
        echo &quot;$(date +&quot;%Y-%m-%d %H:%M:%S&quot;) - Restored previous nftables configuration due to failure&quot; &gt;&gt; $LOG_FILE
    fi
else
    echo &quot;$(date +&quot;%Y-%m-%d %H:%M:%S&quot;) - No changes in Cloudflare IPs&quot; &gt;&gt; $LOG_FILE
fi</code></pre><h3 id="step-9-set-up-a-droplet-firewall">Step 9: Set Up a Droplet Firewall</h3><p>1. <strong>Use Your Hosting Provider&apos;s Firewall</strong>: Configure the firewall through your hosting provider&#x2019;s dashboard to allow only necessary ports (typically your SSH port, 80, 443).</p><h3 id="step-10-secure-ghost-admin-with-cloudflare-zero-trust">Step 10: Secure Ghost Admin with Cloudflare Zero Trust</h3><p>Restrict access to the Ghost admin panel using Cloudflare Zero Trust while leaving the <a href="https://ghost.org/docs/content-api/?ref=keelancannoo.com" rel="noreferrer">Content API</a> publicly accessible.</p><p>1. <strong>Enable Zero Trust</strong>: In the Cloudflare dashboard, go to <a href="https://one.dash.cloudflare.com/?ref=keelancannoo.com" rel="noreferrer">Zero Trust</a> and set up Access Policies to restrict access to the Ghost admin panel.</p><p><strong>Create an Access Application for Ghost Admin</strong>:</p><ul><li>Go to <strong>Access &gt; Applications</strong>.</li><li>Click <strong>Add an application</strong> and choose <strong>Self-hosted</strong>.</li><li>Name the application (e.g., &quot;Ghost Admin&quot;).</li><li>Set <strong>Application domain</strong> to <code>yourdomain.com/ghost</code>.</li><li>Click <strong>Next</strong>.</li></ul><p><strong>Create an Access Policy</strong>:</p><ul><li>Name the policy (e.g., &quot;Protect Ghost Admin&quot;).</li><li>Set <strong>Action</strong> to <strong>Allow</strong>.</li><li>Under <strong>Include</strong>, specify authorized email addresses or groups.</li><li>Click <strong>Next</strong> and <strong>Add application</strong>.</li></ul><h4 id="create-a-bypass-rule-for-the-content-api">Create a Bypass Rule for the Content API</h4><ol start="5"><li><strong>Create a Bypass Application for Content API</strong>:<ul><li>Go to <strong>Access &gt; Applications</strong>.</li><li>Click <strong>Add an application</strong> and choose <strong>Self-hosted</strong>.</li><li>Name the application (e.g., &quot;Ghost Content API&quot;).</li><li>Set 
<strong>Application domain</strong> to <code>yourdomain.com/ghost/api/content</code>.</li><li>Click <strong>Next</strong>.</li></ul></li><li><strong>Create Bypass Policy</strong>:<ul><li>Name the policy (e.g., &quot;Bypass Ghost Content API&quot;).</li><li>Set <strong>Action</strong> to <strong>Bypass</strong>.</li><li>Under <strong>Include</strong>, select <strong>Everyone</strong>.</li><li>Click <strong>Next</strong> and <strong>Add application</strong>.</li></ul></li></ol><h4 id="verify-configuration">Verify Configuration</h4><ol start="7"><li><strong>Verify Access</strong>:<ul><li>Go to <code>https://yourdomain.com/ghost</code> and ensure authentication is required.</li><li>Access <code>https://yourdomain.com/ghost/api/content</code> to confirm it is publicly accessible.</li></ul></li></ol><h3 id="step-11-regular-backups">Step 11: Regular Backups</h3><p>Ensuring regular backups of your Ghost CMS content is crucial to protect against data loss. For a comprehensive, step-by-step guide on setting up end-to-end encrypted backups using <code>rclone</code>, please refer to my detailed blog post: </p><figure class="kg-card kg-bookmark-card"><a class="kg-bookmark-container" href="https://keelancannoo.com/implementing-end-to-end-encrypted-backups-with-rclone/"><div class="kg-bookmark-content"><div class="kg-bookmark-title">Implementing End-to-End Encrypted Backups with Rclone</div><div class="kg-bookmark-description">Learn how to set up a robust, end-to-end encrypted backup strategy for your data using Rclone. 
This tutorial ensures your valuable information remains secure and your hard work preserved by leveraging the power of cloud storage and encryption.</div><div class="kg-bookmark-metadata"><img class="kg-bookmark-icon" src="https://keelancannoo.com/content/images/size/w256h256/2024/06/catlogocolor.png" alt="From Ground Zero to Go-Live: Guide to Setting Up Ghost CMS on Linux"><span class="kg-bookmark-author">Keelan Cannoo</span><span class="kg-bookmark-publisher">Keelan Cannoo</span></div></div><div class="kg-bookmark-thumbnail"><img src="https://keelancannoo.com/content/images/2024/08/rclone-1.png" alt="From Ground Zero to Go-Live: Guide to Setting Up Ghost CMS on Linux"></div></a></figure><h3 id="step-12-monitor-website-uptime-with-uptimerobot">Step 12: Monitor Website Uptime with UptimeRobot</h3><p>Monitoring your website&apos;s uptime helps you ensure it&apos;s always available to your visitors.</p><ol><li><strong>Sign Up for UptimeRobot</strong>: Register an account at <a href="https://uptimerobot.com/?ref=keelancannoo.com" rel="noreferrer">UptimeRobot</a>.</li><li><strong>Add a New Monitor</strong>:<ul><li>Select &#x201C;HTTP(s)&#x201D; as the monitor type.</li><li>Enter your website URL.</li><li>Set the monitoring interval and enable notifications.</li></ul></li><li>Set up a notification to be sent when your website is back online, ensuring you&apos;re aware of both downtime and recovery events.</li></ol><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/07/image-14.png" class="kg-image" alt="From Ground Zero to Go-Live: Guide to Setting Up Ghost CMS on Linux" loading="lazy" width="1822" height="827" srcset="https://keelancannoo.com/content/images/size/w600/2024/07/image-14.png 600w, https://keelancannoo.com/content/images/size/w1000/2024/07/image-14.png 1000w, https://keelancannoo.com/content/images/size/w1600/2024/07/image-14.png 1600w, https://keelancannoo.com/content/images/2024/07/image-14.png 1822w" 
sizes="(min-width: 720px) 720px"></figure><h3 id="conclusion">Conclusion</h3><p>Congratulations! You have successfully set up Ghost CMS on a Linux server with enhanced security and performance using Cloudflare. Your website is now live and ready to handle visitors securely. Remember to regularly update your server and Ghost installation to keep your site secure and running smoothly.</p><p>Happy blogging!</p>]]></content:encoded></item><item><title><![CDATA[Audit of Compression Utilities in Response to the XZ Security Incident]]></title><description><![CDATA[Explore the collaborative effort to audit compression utilities following the discovery of a critical vulnerability in xzutils.]]></description><link>https://keelancannoo.com/collaborative-audit-of-compression-utilities-in-response-to-the-xz-security-incident/</link><guid isPermaLink="false">66786613030e37033b1cbab9</guid><dc:creator><![CDATA[Keelan Cannoo]]></dc:creator><pubDate>Mon, 01 Apr 2024 18:16:00 GMT</pubDate><media:content url="https://keelancannoo.com/content/images/2024/06/_b06b804a-dbf4-48eb-8b52-83d00fa45b48.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://keelancannoo.com/content/images/2024/06/_b06b804a-dbf4-48eb-8b52-83d00fa45b48.jpeg" alt="Audit of Compression Utilities in Response to the XZ Security Incident"><p>In recent cybersecurity news, a significant vulnerability has been uncovered in xzutils, a widely-used compression utility. xzutils is included by default in many Linux distributions. Tracked as CVE-2024-3094, this backdoor affects versions 5.6.0 and 5.6.1 of XZ Utils, posing a serious threat to users globally.</p><h2 id="what-happened">What Happened?</h2><p>The threat actor Jia Tan started contributing to the XZ project almost two years ago, slowly building credibility until he was given maintainer responsibilities. Jia Tan&apos;s rise involved clever social engineering. 
Using fake accounts, he overwhelmed the original maintainer with feature requests and bug reports, creating pressure to add more help. This tactic secured Jia Tan a significant role in the project.</p><p>In early 2024, after more than two years of contributions, Jia Tan introduced changes in release 5.6.0, including a sophisticated backdoor. This revelation shocked the community, highlighting vulnerabilities in the open-source model.</p><figure class="kg-card kg-image-card kg-card-hascaption"><img src="https://keelancannoo.com/content/images/2024/07/mini_magick20240710-39901-hb611a.jpg" class="kg-image" alt="Audit of Compression Utilities in Response to the XZ Security Incident" loading="lazy" width="2000" height="2800" srcset="https://keelancannoo.com/content/images/size/w600/2024/07/mini_magick20240710-39901-hb611a.jpg 600w, https://keelancannoo.com/content/images/size/w1000/2024/07/mini_magick20240710-39901-hb611a.jpg 1000w, https://keelancannoo.com/content/images/size/w1600/2024/07/mini_magick20240710-39901-hb611a.jpg 1600w, https://keelancannoo.com/content/images/2024/07/mini_magick20240710-39901-hb611a.jpg 2000w" sizes="(min-width: 720px) 720px"><figcaption><i><em class="italic" style="white-space: pre-wrap;">Credit: </em></i><a href="https://x.com/fr0gger_?ref=keelancannoo.com"><i><em class="italic" style="white-space: pre-wrap;">Thomas Roccia&#xA0;</em></i></a><i><em class="italic" style="white-space: pre-wrap;"> for the </em></i><a href="https://x.com/fr0gger_/status/1774342248437813525?ref=keelancannoo.com" rel="noreferrer"><i><em class="italic" style="white-space: pre-wrap;">infographic</em></i></a><i><em class="italic" style="white-space: pre-wrap;"> outlining the XZ Outbreak.</em></i></figcaption></figure><h2 id="impact">Impact</h2><p>The exploitation of CVE-2024-3094 enabled remote code execution (RCE), giving attackers unauthorized access to a very specific set of systems that relied on compromised versions of XZ Utils. 
This raised concerns about the integrity and security of these systems, highlighting the need for immediate action.</p><h2 id="response-and-collaboration">Response and Collaboration</h2><p>In response to this critical discovery, Michael Scovetta, a security expert from Microsoft, initiated a community project aimed at conducting a comprehensive audit of various compression utilities. This proactive collaboration aimed to identify similar vulnerabilities in other tools and prevent potential threats before they could be exploited. Participants were encouraged to contribute and <a href="https://cyberstorm.mu/?ref=keelancannoo.com" rel="noreferrer">Cyberstorm.mu</a> eagerly joined the effort to support this important work.</p><h2 id="the-audit-process">The Audit Process</h2><p>The audit involved creating a list of widely used compression libraries including gzip, bzip2 and zip for scrutiny. A spreadsheet was used to track and flag binary test cases needing closer inspection. Detailed examinations of the code and binaries were conducted for these flagged entries and findings were documented to ensure continuous improvement of security.</p><p>The discovery of CVE-2024-3094 serves as a stark reminder of the vulnerabilities in the open-source ecosystem. It underscores the importance of vigilance, robust security measures and collaborative efforts to protect the integrity of essential software tools.</p>]]></content:encoded></item><item><title><![CDATA[How Contributing to Open Source Led Me to Meet a FreeBSD Co-Founder]]></title><description><![CDATA[Diving into open source opens doors to significant learning and growth. 
Sharing these experiences fuels community engagement and showcases the endless opportunities for innovation and skill development in the open-source world.]]></description><link>https://keelancannoo.com/how-contributing-to-open-source-led-me-to-meet-a-freebsd-co-founder/</link><guid isPermaLink="false">668edb5da267760b89392270</guid><category><![CDATA[Cyberstorm.mu]]></category><category><![CDATA[Events]]></category><dc:creator><![CDATA[Keelan Cannoo]]></dc:creator><pubDate>Wed, 12 Jul 2023 19:09:00 GMT</pubDate><media:content url="https://keelancannoo.com/content/images/2024/08/opensource.png" medium="image"/><content:encoded><![CDATA[<h2 id="introduction">Introduction</h2><img src="https://keelancannoo.com/content/images/2024/08/opensource.png" alt="How Contributing to Open Source Led Me to Meet a FreeBSD Co-Founder"><p>The open-source world is a remarkable place where collaboration and innovation thrive. For me, contributing to open-source projects has been a transformative experience, culminating in the incredible opportunity to meet and work alongside one of the original co-founders of FreeBSD, Rodney W. Grimes. This blog post shares my journey through open source, highlighting how contributions opened doors to this unique experience and how <a href="https://cyberstorm.mu/?ref=keelancannoo.com" rel="noreferrer">Cyberstorm.mu</a> facilitated an inspiring talk by Rodney at our university.</p><h2 id="getting-started-with-open-source">Getting Started with Open Source</h2><p>My journey into open source began during my second year at university, driven by a strong interest in networking and software development. My first contributions were a security patch for <a href="https://github.com/libevent/libevent?ref=keelancannoo.com" rel="noreferrer">libevent</a> and Dockerfiles for the <a href="https://github.com/open-quantum-safe?ref=keelancannoo.com" rel="noreferrer">OpenQuantumSafe</a> project. 
Later, I started working on <a href="https://github.com/FRRouting/frr?ref=keelancannoo.com" rel="noreferrer">FRRouting</a> (FRR) with Loganaden Velvindron and Sarvesh Dindyal. We persevered and started making meaningful contributions focusing on fixing memory leaks.</p><h2 id="the-breakthrough">The Breakthrough</h2><p>As we became more involved in the FRRouting project, we had the incredible opportunity to work under the guidance of Rodney W. Grimes, one of the three original founders of FreeBSD. Collaborating remotely, we continued to identify and fix memory leaks in FRRouting, aiming to improve the software&apos;s performance and stability. This collaboration eventually led to Rodney visiting Mauritius to work with us directly during my final year at university.</p><h2 id="meeting-and-working-with-a-freebsd-co-founder">Meeting and Working with a FreeBSD Co-Founder</h2><p>Meeting Rodney in person was an extraordinary experience. His deep understanding of system internals and network protocols was invaluable. During his visit to Mauritius, we had the opportunity to work together closely. He taught us advanced debugging techniques using GDB and provided hands-on training in software debugging and development practices. His mentorship provided us with insights into best practices and advanced concepts that significantly elevated our technical skills.</p><h2 id="bringing-knowledge-home">Bringing Knowledge Home</h2><p>The impact of this collaboration extended beyond personal growth. Cyberstorm.mu recognized the value of sharing this opportunity with our local community. We organized a talk to disseminate the insights gained. The highlight was inviting Rodney W. Grimes to speak at our university. His talk was a resounding success, drawing a large audience of students, professors and local tech enthusiasts eager to learn from his experiences and insights.</p>
<!--kg-card-begin: html-->
<iframe width="560" height="315" src="https://www.youtube.com/embed/PQ8WyQhkynQ?si=UrpGuxmqke8rZYxt" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>
<!--kg-card-end: html-->
<h2 id="the-importance-of-community-and-collaboration">The Importance of Community and Collaboration</h2><p>This journey underscored the power of open-source communities. The collaborative spirit, the willingness to share knowledge and the drive to innovate create an environment where anyone can contribute and grow. Meeting and working with a pioneer like Rodney W. Grimes was a reminder of how much can be achieved through dedication and collaboration.</p><h2 id="conclusion">Conclusion</h2><p>Reflecting on my open-source journey, I realize how contributing to projects like OpenQuantumSafe and FRRouting has profoundly impacted my professional development and community engagement. The opportunity to meet and work with Rodney W. Grimes was a significant milestone, illustrating the incredible possibilities that open-source contributions can offer. I encourage anyone interested in technology to dive into open-source projects&#x2014;it&apos;s a gateway to learning, mentorship from experienced professionals and unique opportunities.</p>]]></content:encoded></item><item><title><![CDATA[Understanding Post-Quantum Cryptography: Securing Tomorrow's Data Today]]></title><description><![CDATA[Quantum computing is set to disrupt our current cryptographic systems, risking our digital security. Explore the urgent need for quantum-resistant algorithms and how we're preparing for this shift. 
Stay ahead of the curve and secure your data from future quantum threats.]]></description><link>https://keelancannoo.com/understanding-post-quantum-cryptography-securing-tomorrows-data-today/</link><guid isPermaLink="false">66894de4a267760b8939223d</guid><category><![CDATA[Cryptography]]></category><dc:creator><![CDATA[Keelan Cannoo]]></dc:creator><pubDate>Tue, 09 May 2023 20:00:00 GMT</pubDate><media:content url="https://keelancannoo.com/content/images/2024/07/_f5bfb848-07b3-43cb-b526-4ecf4a1647f4.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://keelancannoo.com/content/images/2024/07/_f5bfb848-07b3-43cb-b526-4ecf4a1647f4.jpeg" alt="Understanding Post-Quantum Cryptography: Securing Tomorrow&apos;s Data Today"><p>Public key cryptography is an integral part of our daily internet activities. Whether you&apos;re browsing your favorite website, watching the latest trending series, sending an email, online shopping or social media messaging, it&apos;s there, silently ensuring that your data remains secure. But with the advent of quantum computing, the security provided by traditional public key cryptography is under threat. This blog post aims to shed light on the need for new algorithms and the emergence of post-quantum cryptography.</p><h2 id="objectives">Objectives</h2><ul><li>Gain an understanding of the need for new algorithms</li><li>Understand post-quantum cryptography</li></ul><h2 id="traditional-public-key-cryptography-a-quick-primer">Traditional Public Key Cryptography: A Quick Primer</h2><p>Before we dive into post-quantum cryptography, let&apos;s take a moment to understand the mechanism behind traditional public-key cryptography. What really makes these algorithms secure?</p><p>Traditional public key cryptography relies on two types of mathematical problems: prime integer factorization and the discrete logarithm. 
These problems are relatively straightforward to compute in one direction but extremely difficult to reverse without specific information.</p><h3 id="prime-integer-factorization">Prime Integer Factorization</h3><p>For example, given two huge prime numbers, it&apos;s easy to compute their product. However, given the product, it is exceedingly difficult to determine the original prime numbers.</p><p>For instance, while it&#x2019;s simple to multiply 661 and 251 to get 165,911, figuring out the prime factors of 165,911 is significantly harder without knowing them in advance. Real-world applications use much larger prime numbers, adding to the complexity.</p><h3 id="discrete-logarithm"><strong>Discrete Logarithm</strong></h3><p>The discrete logarithm problem is another one-way function that is easy to compute but hard to reverse. In the context of cryptography, this often involves a large prime number p and a generator g. Given g and x, where y &#x2261; g<sup>x</sup>&#xA0; mod p, it is easy to compute y. However, given y, it is extremely difficult to determine x.</p><p>For example, if g = 5, p = 23 and y = 8, finding x such that 8 &#x2261; 5<sup>x</sup> mod 23 is a hard problem. In real-world applications, g, p and y are much larger, making the problem even more challenging. In this example, x=6 satisfies 8 &#x2261; 5<sup>6</sup>mod 23.</p><p>In simple terms, public key cryptography relies on these mathematical problems to exchange keys for encryption and to create digital signatures.</p><h2 id="the-quantum-threat">The Quantum Threat</h2><p>If these algorithms have been working fine, what&apos;s the issue? The problem lies in the advancements in quantum computing. 
With tech giants like IBM, Google and Microsoft investing hundreds of millions of dollars, significant breakthroughs in quantum computing are happening.</p><p>In 1994, the mathematician Peter Shor devised an algorithm that can quickly and efficiently solve the prime factorization and discrete logarithm problems. Fortunately, Shor&#x2019;s algorithm only runs on quantum computers which are still in their infancy. However, it is only a matter of time before powerful enough quantum computers can be built to break the cryptographic algorithms we currently rely on.</p><p>Some might think, &quot;There&#x2019;s time left. Why worry now?&quot;. Unfortunately, that&#x2019;s a misconception. Communications exchanged today can be intercepted and stored, only to be decrypted later when quantum computers become available. There is evidence suggesting that some entities might already be engaging in such activities as revealed by Edward Snowden. This means that today&#x2019;s communications are at risk and need to be secured now to protect future data integrity.</p><h2 id="nist-post-quantum-cryptography-pqc-standardization">NIST Post-Quantum Cryptography (PQC) Standardization</h2><p>Recognizing the impending threat, the National Institute of Standards and Technology (NIST) launched a <a href="https://csrc.nist.gov/projects/post-quantum-cryptography/post-quantum-cryptography-standardization?ref=keelancannoo.com" rel="noreferrer">competition</a> to develop new cryptographic algorithms that are resistant to quantum attacks. Researchers from around the world proposed new algorithms based on different mathematical problems such as lattice-based cryptography, code-based cryptography and others. Over several years and rounds, these algorithms were scrutinized and tested.</p><p>Some of these algorithms, such as CRYSTALS-Kyber and CRYSTALS-Dilithium, have emerged as leading candidates for standardization. 
While they haven&apos;t been fully standardized yet, they are being actively evaluated and adopted by various organizations. For example, the German Federal Office for Information Security (BSI) and the French ANSSI endorse FrodoKEM for post-quantum security.</p><h2 id="contributions-from-cyberstormmu">Contributions from cyberstorm.mu</h2><p><a href="https://cyberstorm.mu/?ref=keelancannoo.com" rel="noreferrer">Cyberstorm.mu</a>, has been actively working and contributing towards post-quantum algorithms. We have analyzed the impact of post-quantum cryptography in protocols such as QUIC and DNS, and contributed to open-source projects by <a href="https://github.com/open-quantum-safe?ref=keelancannoo.com" rel="noreferrer">OpenQuantumSafe</a>. This hands-on experience has given us unique insights into the practical challenges and solutions for integrating post-quantum cryptography into existing systems.</p><h2 id="global-adoption">Global Adoption</h2><p>The adoption of post-quantum cryptography is gaining momentum globally. Singapore and the White House are among the entities that have embraced these new algorithms, signaling a shift towards quantum-resistant security measures.</p><h2 id="transitioning-to-post-quantum-cryptography">Transitioning to Post-Quantum Cryptography</h2><p>It&#x2019;s important to note that post-quantum cryptographic algorithms have not yet withstood the test of time. As such, the transition is happening in a hybrid manner where traditional cryptographic methods are used alongside post-quantum algorithms to ensure a smooth and secure shift.</p><h2 id="conclusion">Conclusion</h2><p>The era of quantum computing is on the horizon, bringing with it both incredible opportunities and significant challenges. While traditional public key cryptography has served us well, the advent of quantum computers necessitates the development and adoption of new cryptographic algorithms. 
By understanding and preparing for these changes now, we can ensure that our data remains secure in the future.</p><p>As we continue to explore the realm of post-quantum cryptography, it&apos;s crucial to stay informed and proactive. After all, securing tomorrow&apos;s data today is not just a necessity; it&apos;s a responsibility.</p>]]></content:encoded></item><item><title><![CDATA[Overview and Dissection of TLS 1.3 Handshake using Wireshark]]></title><description><![CDATA[Demystify TLS 1.3 with Wireshark! Explore handshake intricacies, decrypt traffic, and grasp secure communication nuances in under 6 minutes. Unveil TLS evolution now!]]></description><link>https://keelancannoo.com/overview-and-dissection-of-tls-1-3-handshake-using-wireshark/</link><guid isPermaLink="false">657d8a425dc33f27bb98cb2f</guid><category><![CDATA[Wireshark]]></category><category><![CDATA[TLS]]></category><dc:creator><![CDATA[Keelan Cannoo]]></dc:creator><pubDate>Thu, 21 Apr 2022 17:24:00 GMT</pubDate><media:content url="https://keelancannoo.com/content/images/2024/06/_b08ec59f-9f77-44be-8123-09001b7c4bbd.jpeg" medium="image"/><content:encoded><![CDATA[<h2 id="objectives">Objectives</h2><img src="https://keelancannoo.com/content/images/2024/06/_b08ec59f-9f77-44be-8123-09001b7c4bbd.jpeg" alt="Overview and Dissection of TLS 1.3 Handshake using Wireshark"><p>By the end of this guide, you will</p><ul><li>Have a basic understanding of how TLS 1.3 handshake works</li><li>Be able to use <code>curl</code> to make request and get additional information such as the IP address of web server, TCP port, etc</li><li>Be able to capture and filter packets using wireshark</li><li>Be able to log pre-master secrets and use them to decrypt TLS traffic</li></ul><h2 id="overview">Overview</h2><p>TLS 1.3 is a major overhaul of the TLS protocol with enhanced speed, improved efficiency and better security. Ciphers and algorithms which are considered weak and insecure have been removed in the latest TLS version. 
These include RSA key exchange, 3-DES, DES and RC4 stream cipher among many more. As a result, the number of cipher suites dropped from 37 to 5. RSA key exchange was removed since it does not provide forward secrecy and is vulnerable to specific attacks such as Bleichenbacher attacks. This means that if the private key of the server is leaked in the future, the past ciphertexts can be decrypted. &#xA0;In TLS 1.3 all key exchange algorithms were removed except for Diffie-Hellman (DH).</p><p>Additionally TLS 1.3 requires servers to sign the entire handshake including the cipher negotiation unlike TLS 1.2. Furthermore, TLS 1.3 takes only one round trip to complete a handshake whereas TLS 1.2 takes two round trips. This is possible because the client guesses the parameters the server will use and can therefore send the DH key share in the first handshake message itself (Client Hello).&#xA0;</p><h2 id="prerequisites">Prerequisites</h2><ul><li>Wireshark</li></ul><pre><code class="language-bash">sudo apt install wireshark</code></pre><ul><li>Curl</li></ul><pre><code class="language-bash">sudo apt install curl</code></pre><h2 id="capturing-the-packets-and-ssl-key-log">Capturing the packets and SSL key log</h2><ol><li>Set SSLKEYLOGFILE Environment Variable</li></ol><p>Open terminal and set the SSLKEYLOGFILE environment variable to the file path where you want to save the key log. Ensure you have write permission to the location you specify. 
Run:<br></p><pre><code class="language-bash">export SSLKEYLOGFILE=$HOME/Desktop/keylog</code></pre><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text"><b><strong style="white-space: pre-wrap;">Note:</strong></b> A key log is a record of the values which are used to generate TLS session keys.</div></div><ol start="2"><li>Open Wireshark in another terminal tab and start the capture</li></ol><pre><code class="language-bash">sudo wireshark</code></pre><p>Double-click on the appropriate network interface to start capturing. Alternatively, you can right-click it and choose &quot;Start capture&quot;.</p><ol start="3"><li>Use curl to make a request to a server which supports TLS 1.3</li></ol><p>Ensure that you are in the same terminal in which you set the SSLKEYLOGFILE environment variable. You can run&#xA0;<code>echo $SSLKEYLOGFILE</code>&#xA0;to ensure that it is set.<br><br>Note: If you want to access the website through Chrome (<code>google-chrome https://google.com</code>) or Firefox (<code>firefox https://google.com</code>), make sure you open the web browser from that terminal.<br><br>Close all browsers and run&#xA0;<code>curl https://google.com -v</code>.</p><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2022-04-21-15-53-01.png" class="kg-image" alt="Overview and Dissection of TLS 1.3 Handshake using Wireshark" loading="lazy" width="536" height="129"></figure><p>The output will display the IP address of the web server (e.g., 172.217.170.174).</p><p>At this point, the key log file you specified in the first step should have been automatically created and written to.</p><ol start="4"><li>Stop the capture and filter the packets</li></ol><p>Use the following filter to display only the packets that involve that web server and use the TLS
protocol:</p><pre><code class="language-wireshark">tls &amp;&amp; ip.addr == 172.217.170.174</code></pre><h2 id="analyzing-the-packets">Analyzing the packets</h2><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2022-04-21-15-35-32.png" class="kg-image" alt="Overview and Dissection of TLS 1.3 Handshake using Wireshark" loading="lazy" width="838" height="244" srcset="https://keelancannoo.com/content/images/size/w600/2024/06/Screenshot-from-2022-04-21-15-35-32.png 600w, https://keelancannoo.com/content/images/2024/06/Screenshot-from-2022-04-21-15-35-32.png 838w" sizes="(min-width: 720px) 720px"></figure><p>At first glance, it can be seen that:</p><ol><li>The client first sends Client Hello to the server.</li><li>The server responds with Server Hello followed by Change Cipher Spec, unlike in TLS 1.2.</li><li>The server also sends some encrypted application data along.</li><li>The client sends a Change Cipher Spec to the server together with some encrypted data. The rest of the conversation is encrypted as well.</li></ol><p>Now we will use the client key log file to decrypt the TLS traffic. In Wireshark, go to Edit -&gt; Preferences -&gt; Protocols -&gt; TLS.
Then change the (Pre)-Master-Secret log filename to the location of the key log file and click OK.</p><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2022-04-21-17-07-23.png" class="kg-image" alt="Overview and Dissection of TLS 1.3 Handshake using Wireshark" loading="lazy" width="711" height="513" srcset="https://keelancannoo.com/content/images/size/w600/2024/06/Screenshot-from-2022-04-21-17-07-23.png 600w, https://keelancannoo.com/content/images/2024/06/Screenshot-from-2022-04-21-17-07-23.png 711w"></figure><p>Wireshark will use the logged secrets to derive the session keys and decrypt all the encrypted application data of the conversation.</p><h3 id="making-sense-of-the-tls-13-handshake"><strong>Making sense of the TLS 1.3 handshake</strong></h3><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2022-04-21-17-12-14.png" class="kg-image" alt="Overview and Dissection of TLS 1.3 Handshake using Wireshark" loading="lazy" width="1037" height="244" srcset="https://keelancannoo.com/content/images/size/w600/2024/06/Screenshot-from-2022-04-21-17-12-14.png 600w, https://keelancannoo.com/content/images/size/w1000/2024/06/Screenshot-from-2022-04-21-17-12-14.png 1000w, https://keelancannoo.com/content/images/2024/06/Screenshot-from-2022-04-21-17-12-14.png 1037w" sizes="(min-width: 720px) 720px"></figure><h4 id="1-client-sends-client-hello">1.&#xA0; Client sends Client Hello</h4><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2022-04-21-18-18-00.png" class="kg-image" alt="Overview and Dissection of TLS 1.3 Handshake using Wireshark" loading="lazy" width="640" height="423" srcset="https://keelancannoo.com/content/images/size/w600/2024/06/Screenshot-from-2022-04-21-18-18-00.png 600w, https://keelancannoo.com/content/images/2024/06/Screenshot-from-2022-04-21-18-18-00.png 640w"></figure><p>It&apos;s odd
to see the client request a TLS 1.2 handshake. This happens because TLS 1.3 is a victim of protocol ossification. A large number of servers implemented TLS version negotiation incorrectly. When presented with a TLS version higher than what they support, those servers disconnect instead of replying with the newest TLS version that both sides support.</p><p>In order to be compatible with such servers, TLS 1.3 disguises itself as a TLS 1.2 handshake and introduces extensions to add the new functionality.</p><p>The latest TLS version which the client supports is actually found in the supported_versions extension.</p><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2022-04-21-21-47-46.png" class="kg-image" alt="Overview and Dissection of TLS 1.3 Handshake using Wireshark" loading="lazy" width="387" height="92"></figure><p>The Client Hello includes the TLS version, a list of cipher suites the client supports, the client random and the key_share extension.</p><p>The key_share extension contains the client&apos;s key exchange parameter.</p><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2022-04-21-22-08-16.png" class="kg-image" alt="Overview and Dissection of TLS 1.3 Handshake using Wireshark" loading="lazy" width="695" height="143" srcset="https://keelancannoo.com/content/images/size/w600/2024/06/Screenshot-from-2022-04-21-22-08-16.png 600w, https://keelancannoo.com/content/images/2024/06/Screenshot-from-2022-04-21-22-08-16.png 695w"></figure><p>The client guesses which key exchange method the server is likely to choose and sends its key share for that particular method.
If the guess is wrong, the server sends a Hello Retry Request message.</p><h4 id="2-server-sends-server-hello-and-change-cipher-spec"><strong>2.&#xA0; Server sends Server Hello and Change Cipher Spec</strong></h4><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2022-04-21-22-01-11.png" class="kg-image" alt="Overview and Dissection of TLS 1.3 Handshake using Wireshark" loading="lazy" width="369" height="62"></figure><p>In the supported_versions extension, the server confirms the TLS version.</p><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2022-04-21-22-04-12.png" class="kg-image" alt="Overview and Dissection of TLS 1.3 Handshake using Wireshark" loading="lazy" width="710" height="122" srcset="https://keelancannoo.com/content/images/size/w600/2024/06/Screenshot-from-2022-04-21-22-04-12.png 600w, https://keelancannoo.com/content/images/2024/06/Screenshot-from-2022-04-21-22-04-12.png 710w"></figure><p>The key_share extension contains the selected curve name and the server&apos;s key exchange parameter.</p><p>The Server Hello also contains the server random.</p><p>At this point, the server has the client random, the server random and both key shares. Therefore, the server can compute the shared secret from which the session keys are derived and can start encrypting messages.
So it sends Change Cipher Spec to let the client know that it is switching to encrypted messages. (In TLS 1.3, this message has no cryptographic effect and is retained only for compatibility with middleboxes that expect it.)</p><h4 id="3-server-sends-encrypted-extensions-certificate-certificate-verify-and-finished"><strong>3.&#xA0; Server sends Encrypted Extensions, Certificate, Certificate Verify and Finished</strong></h4><p>All those messages are encrypted using the session key and sent to the client.</p><p>The Encrypted Extensions message contains additional extensions which can be protected but which are not needed to establish the encrypted connection. If there is no such extension, the message is empty.</p><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2022-04-21-22-41-58.png" class="kg-image" alt="Overview and Dissection of TLS 1.3 Handshake using Wireshark" loading="lazy" width="490" height="163"></figure><p>Certificate contains the server&apos;s digital certificate and any per-certificate extensions.</p><p>Certificate Verify: the entire handshake is signed using the server&apos;s private key corresponding to the public key in the Certificate message.</p><p>Finished contains a MAC to ensure the integrity of the handshake. It is sent to indicate that the handshake is done on the server side.</p><h4 id="4-client-sends-change-cipher-spec-and-finished"><strong>4.&#xA0; Client sends Change Cipher Spec and Finished.</strong></h4><p>Change Cipher Spec is sent to let the server know that the client has generated the session keys and is switching to an encrypted environment.</p><p>Finished contains a MAC to ensure the integrity of the handshake. It is sent to indicate that the handshake is done on the client side.</p>]]></content:encoded></item><item><title><![CDATA[The Beginner&#x2019;s Guide To Git &amp; GitHub]]></title><description><![CDATA[Git and GitHub essentials! Navigate version control, SSH keys and fundamental commands effortlessly. Your coding journey starts here.
Let's code together!]]></description><link>https://keelancannoo.com/the-beginners-guide-to-git-github/</link><guid isPermaLink="false">657cb6455dc33f27bb98cb18</guid><category><![CDATA[Github]]></category><dc:creator><![CDATA[Keelan Cannoo]]></dc:creator><pubDate>Wed, 29 Dec 2021 17:01:00 GMT</pubDate><media:content url="https://keelancannoo.com/content/images/2024/06/_be699c60-5dbf-432a-871a-4e4468ee7eef.jpeg" medium="image"/><content:encoded><![CDATA[<img src="https://keelancannoo.com/content/images/2024/06/_be699c60-5dbf-432a-871a-4e4468ee7eef.jpeg" alt="The Beginner&#x2019;s Guide To Git &amp; GitHub"><p>In the fast-paced world of software development, the absence of a reliable version control system can lead teams down a chaotic path. Without tools like Git, teams often resort to ad-hoc methods such as sharing files via email or using shared network drives. Unfortunately, this approach breeds confusion, conflicts, and the ever-present risk of losing track of critical changes. Sounds familiar, huh?</p><h2 id="objectives"><strong>Objectives</strong></h2><p>By the end of this guide, you will:</p><ol><li>Gain a comprehensive understanding of basic Git terminology.</li><li>Learn how to generate and use SSH keys for secure authentication with GitHub using OpenSSH.</li><li>Acquire the necessary skills to navigate GitHub efficiently and effectively.</li></ol><h2 id="prerequisites">Prerequisites</h2><p>Before proceeding, make sure you have the following prerequisites:</p><ul><li>Git</li></ul><pre><code class="language-bash">sudo apt install git</code></pre><ul><li>A <a href="https://github.com/signup?ref=keelancannoo.com" rel="noreferrer">GitHub</a> account</li><li>OpenSSH</li></ul><pre><code class="language-bash">sudo apt install openssh-client</code></pre><h2 id="what-is-git">What is Git?</h2><p>Git is an open source version control system for tracking changes to files over time.
With Git, developers can create snapshots of their code at different points, making it easy to revert to previous versions if needed. It also helps programmers to collaborate on projects and synchronize their work.</p><h2 id="what-is-github">What is GitHub?</h2><p>GitHub is an online code hosting platform for software development projects that use Git. It enables programmers to share their code files and collaborate with other developers from anywhere around the globe.&#xA0;GitHub offers a rich set of collaboration tools including pull requests, issue tracking and project management features. Additionally, GitHub integrates seamlessly with CI/CD pipelines, automating build, test and deployment processes for projects.</p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text"><b><strong style="white-space: pre-wrap;">Note:&#xA0;</strong></b>Git isn&apos;t limited to GitHub! There are plenty of other platforms where you can leverage Git&apos;s power, such as GitLab, Bitbucket, SourceForge, GitKraken and Azure DevOps.</div></div><h2 id="git-terminology">Git Terminology</h2><h3 id="repository"><strong>Repository</strong></h3><p>It is a directory for storing all the files and folders related to a project as well as the history of changes made to those files. A repository residing on a local machine is called a local repository, whereas a repository that is hosted on a server is known as a remote repository.</p><h3 id="commits"><strong>Commits</strong></h3><p>A commit is a snapshot of your repository at a given point, recording the changes made since the previous commit.</p><h3 id="the-staging-environment"><strong>The staging environment</strong></h3><p>Also known as the index, it is an intermediate area where the files that are going to be part of the next commit are stored.
For a file to be part of a commit, you must first add it to the staging environment.</p><h3 id="branch"><strong>Branch</strong></h3><p>A branch is a parallel version of your repository that allows you to work on new features or fixes without affecting the primary or master branch. Branches can be merged back into the main branch when changes are ready to be published.</p><h3 id="master"><strong>Master</strong></h3><p>It is the repository&#x2019;s primary/default branch. In recent years, GitHub has switched to using &apos;main&apos; as the default branch name in repositories, reflecting a more inclusive and neutral terminology.</p><h3 id="push"><strong>Push</strong></h3><p>Push means uploading your committed changes from your local repository to a remote one (in this case GitHub).</p><h3 id="pull"><strong>Pull</strong></h3><p>Pull means downloading changes that you or other people have made from a remote repository to the repository on your local machine.</p><h3 id="pull-request"><strong>Pull request</strong></h3><p>A pull request is a way to ask the repository&#x2019;s maintainers to review the commits you made to their code and, if acceptable, merge the changes into their master branch.</p><h3 id="upstream-branch"><strong>Upstream branch</strong></h3><p>A local branch can be made to track a remote branch so that pushing and pulling become easier and less susceptible to mistakes. In such a case, the local branch is known as the tracking branch and the remote branch is known as the upstream branch.</p><h2 id="generating-and-testing-an-ssh-key">Generating and testing an SSH key</h2><p>Using SSH keys provides a secure and convenient way to access GitHub repositories without the need to enter a username and password for each interaction.
It&apos;s recommended for anyone working with GitHub repositories to ensure secure access and protect sensitive information.</p><p>1.&#xA0; To generate a key pair using the Ed25519 algorithm, run the following command in your terminal:</p><pre><code class="language-bash">ssh-keygen -t ed25519 -C &quot;Enter your GitHub email address here&quot;</code></pre><p>When prompted for a file in which to save the key, press Enter to accept the default location. You may enter a passphrase of your choice if you wish.</p><div class="kg-card kg-callout-card kg-callout-card-blue"><div class="kg-callout-emoji">&#x1F4A1;</div><div class="kg-callout-text"><b><strong style="white-space: pre-wrap;">Note: </strong></b>Ed25519 is used as it is faster and more secure than DSA, RSA and ECDSA. It is resilient to hash function collisions and is also strongly resistant to side-channel attacks. Moreover, it provides the same level of security as the aforementioned algorithms at a much shorter key length.</div></div><p>2.&#xA0; Start the ssh-agent with the following command:</p><pre><code class="language-bash">eval &quot;$(ssh-agent -s)&quot;</code></pre><p>ssh-agent is used to manage SSH keys and to keep track of passphrases.&#xA0;Read more about ssh-agent <a href="https://smallstep.com/blog/ssh-agent-explained/?ref=keelancannoo.com" rel="noreferrer">here</a>.</p><p>3.&#xA0; Add your SSH key to the ssh-agent:</p><pre><code class="language-bash">ssh-add ~/.ssh/id_ed25519</code></pre><p>4.&#xA0; Copy your SSH public key to your clipboard using one of the following methods:</p><p><strong>i. Displaying the SSH Public Key:</strong>&#xA0;To view your SSH public key, use the command:</p><pre><code class="language-bash">cat ~/.ssh/id_ed25519.pub</code></pre><p>When copying the key, ensure that you do not introduce any additional characters, spaces or new lines.</p><p><strong>ii.
Copying the SSH Public Key to Clipboard:</strong>&#xA0;If you have the&#xA0;<code>xclip</code>&#xA0;utility installed, you can copy the public key directly to your clipboard with the following command:</p><pre><code class="language-bash">xclip -sel clip ~/.ssh/id_ed25519.pub</code></pre><p>5.&#xA0; Navigate to SSH and GPG keys in your GitHub account settings.</p><p>i. Click on your profile picture in the top right corner of GitHub. Then click on Settings.</p><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-28-20-52-50.png" class="kg-image" alt="The Beginner&#x2019;s Guide To Git &amp; GitHub" loading="lazy" width="179" height="487"></figure><p>ii. Click on SSH and GPG keys.</p><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-28-21-08-05.png" class="kg-image" alt="The Beginner&#x2019;s Guide To Git &amp; GitHub" loading="lazy" width="303" height="540"></figure><p>6.&#xA0; Click New SSH key.</p><p>7.&#xA0; Enter a descriptive title for your key.</p><p>8.&#xA0; Paste your public key in the key field.</p><p>9.&#xA0; Click Add SSH key.</p><p>10.&#xA0; If prompted, enter your password.</p><p>11.&#xA0; To verify that your SSH key is properly set up for GitHub, run:</p><pre><code class="language-bash">ssh -T git@github.com</code></pre><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-11-44-59.png" class="kg-image" alt="The Beginner&#x2019;s Guide To Git &amp; GitHub" loading="lazy" width="824" height="40" srcset="https://keelancannoo.com/content/images/size/w600/2024/06/Screenshot-from-2021-12-29-11-44-59.png 600w, https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-11-44-59.png 824w" sizes="(min-width: 720px) 720px"></figure><h2 id="basic-commands">Basic commands</h2><p><strong>1.&#xA0; Initializing a Git Repository</strong></p><p>Whether
you&apos;re starting a new project or want to add version control to an existing one, initializing a Git repository is the first step.</p><pre><code class="language-bash">mkdir myProject
cd myProject
git init</code></pre><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-12-32-55.png" class="kg-image" alt="The Beginner&#x2019;s Guide To Git &amp; GitHub" loading="lazy" width="767" height="259" srcset="https://keelancannoo.com/content/images/size/w600/2024/06/Screenshot-from-2021-12-29-12-32-55.png 600w, https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-12-32-55.png 767w" sizes="(min-width: 720px) 720px"></figure><p>To initialize a Git repository, run the&#xA0;<code>git init</code>&#xA0;command in the project directory as shown above.</p><p>When you run&#xA0;<code>git init</code>, Git creates a hidden directory called&#xA0;<code>.git</code>&#xA0;in your project&#x2019;s root folder. This directory houses all the necessary metadata for version control.</p><p><strong>2. Managing Files in Git</strong></p><p>After initializing the repository, you can start adding and managing files.</p><pre><code class="language-bash">touch newFile.txt
git status</code></pre><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-12-45-03.png" class="kg-image" alt="The Beginner&#x2019;s Guide To Git &amp; GitHub" loading="lazy" width="711" height="207" srcset="https://keelancannoo.com/content/images/size/w600/2024/06/Screenshot-from-2021-12-29-12-45-03.png 600w, https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-12-45-03.png 711w"></figure><p><code>git status</code>&#xA0;displays the current state of the working directory and the staging area. It shows which changes have been staged, which changes are not yet staged for commit and which files are not being tracked by Git.</p><ul><li><code>touch newFile.txt</code>: Creates a new file within the project.</li><li><code>git status</code>: Checks the current status of the repository and lists any changes.</li></ul><p><strong>3.&#xA0; Staging Changes</strong></p><p>Before committing changes, you need to stage them for inclusion in the next commit.</p><pre><code class="language-bash">git add .</code></pre><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-13-14-16.png" class="kg-image" alt="The Beginner&#x2019;s Guide To Git &amp; GitHub" loading="lazy" width="464" height="175"></figure><p><code>git add .</code>&#xA0;stages all new, modified and deleted files in the current directory and its subdirectories.</p><p><code>git add -A</code>&#xA0;stages all changes in the entire working tree, not just the current directory.</p><p><strong>4.&#xA0; Configuring Git Identity</strong></p><p>Configure your username and email address for Git
commits.</p><pre><code class="language-bash">git config --global user.name &quot;Enter your name here&quot;
git config --global user.email &quot;Enter your email here&quot;</code></pre><p>These commands set the username and email that are associated with your Git commits.</p><p><code>--global</code>&#xA0;applies the configuration to all repositories for your user account rather than to just one repository. If you want to use another username and email for specific projects in the future, you can run the commands without the&#xA0;<code>--global</code>&#xA0;option inside those repositories.</p><p>Run&#xA0;<code>git config user.name</code>&#xA0;and&#xA0;<code>git config user.email</code>&#xA0;to view your Git username and email respectively.</p><p><strong>5.&#xA0; Creating commits</strong></p><p>To create a commit, use the following command:</p><pre><code class="language-bash">git commit -m &quot;Your message about the commit&quot;</code></pre><p>Make sure that the message is meaningful and related to the commit!</p><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-14-42-01.png" class="kg-image" alt="The Beginner&#x2019;s Guide To Git &amp; GitHub" loading="lazy" width="508" height="73"></figure><p><strong>6.&#xA0; Branching out</strong></p><p>Branching allows you to work on different features or versions of your project concurrently.
Here are some common commands for managing branches:</p><p>To create a new branch called <code>newBranch</code>:</p><pre><code class="language-bash">git branch newBranch</code></pre><p>To switch to the branch called <code>newBranch</code>:</p><pre><code class="language-bash">git checkout newBranch</code></pre><p>To create a new branch called <code>newBranch</code> and switch to it in a single command:</p><pre><code class="language-bash">git checkout -b newBranch</code></pre><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-14-52-54-1.png" class="kg-image" alt="The Beginner&#x2019;s Guide To Git &amp; GitHub" loading="lazy" width="492" height="95"></figure><p><code>git branch</code>&#xA0;displays all the branches and shows which branch you are currently on.</p><p>Using branches helps keep your work organized and enables easier collaboration by isolating different lines of development.</p><p><strong>7.&#xA0; Create a new GitHub repository</strong>&#xA0;</p><p>Select new repository from the&#xA0;<strong>+</strong>&#xA0;dropdown menu next to your profile picture in the top right corner.</p><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-15-07-09.png" class="kg-image" alt="The Beginner&#x2019;s Guide To Git &amp; GitHub" loading="lazy" width="1273" height="207" srcset="https://keelancannoo.com/content/images/size/w600/2024/06/Screenshot-from-2021-12-29-15-07-09.png 600w, https://keelancannoo.com/content/images/size/w1000/2024/06/Screenshot-from-2021-12-29-15-07-09.png 1000w, https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-15-07-09.png 1273w" sizes="(min-width: 720px) 720px"></figure><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-15-14-58.png" class="kg-image" alt="The Beginner&#x2019;s Guide To Git &amp; GitHub" 
loading="lazy" width="816" height="617" srcset="https://keelancannoo.com/content/images/size/w600/2024/06/Screenshot-from-2021-12-29-15-14-58.png 600w, https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-15-14-58.png 816w" sizes="(min-width: 720px) 720px"></figure><p>Enter a name for your repository and click Create repository.&#xA0;</p><p><strong>8.&#xA0; Linking Your Local Repository to GitHub</strong></p><p>After ensuring that SSH is selected, go to your GitHub repository page and copy the SSH URL of your repository. It should look something like <code>git@github.com:username/repository.git</code>.</p><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-15-27-25.png" class="kg-image" alt="The Beginner&#x2019;s Guide To Git &amp; GitHub" loading="lazy" width="1219" height="543" srcset="https://keelancannoo.com/content/images/size/w600/2024/06/Screenshot-from-2021-12-29-15-27-25.png 600w, https://keelancannoo.com/content/images/size/w1000/2024/06/Screenshot-from-2021-12-29-15-27-25.png 1000w, https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-15-27-25.png 1219w" sizes="(min-width: 720px) 720px"></figure><pre><code class="language-bash">git remote add origin git@github.com:username/repository.git</code></pre><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-15-45-03.png" class="kg-image" alt="The Beginner&#x2019;s Guide To Git &amp; GitHub" loading="lazy" width="768" height="170" srcset="https://keelancannoo.com/content/images/size/w600/2024/06/Screenshot-from-2021-12-29-15-45-03.png 600w, https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-15-45-03.png 768w" sizes="(min-width: 720px) 720px"></figure><p>This command creates an entry in your Git configuration that associates the name <code>origin</code> with your GitHub repository URL. 
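</p><p>For reference, this is roughly what the entry created by <code>git remote add</code> looks like inside the repository&#x2019;s hidden <code>.git/config</code> file (the URL shown is the placeholder from above):</p><pre><code class="language-bash"># Excerpt from .git/config after adding the remote
[remote "origin"]
	url = git@github.com:username/repository.git
	fetch = +refs/heads/*:refs/remotes/origin/*</code></pre><p>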
By convention, the first remote is named <code>origin</code>, but you are free to choose another name.</p><pre><code class="language-bash">git remote -v</code></pre><p>This command will list the remote repositories linked to your local repository, showing the name <code>origin</code> and the corresponding URL.</p><p><strong>9.&#xA0; Creating a pull request</strong></p><p>Submit changes to other projects by creating pull requests on GitHub. This allows project maintainers to review and merge your changes.</p><p><strong>Fork the repository:</strong></p><p>If you do not have write access to the repository (i.e., it belongs to another person or organization), you need to create a personal copy of the repository by forking it.</p><ul><li>Navigate to the repository you want to contribute to on GitHub.</li><li>Click the &quot;Fork&quot; button at the top-right corner of the page to create a copy of the repository under your GitHub account.</li><li>Clone your forked repository to your local machine:</li></ul><pre><code class="language-bash">git clone https://github.com/your-username/repository-name.git
cd repository-name</code></pre><p><strong>To create a pull request, follow these steps:</strong></p><p>i.&#xA0; Switch to the branch where you have committed your changes.</p><pre><code class="language-bash">git checkout newBranch</code></pre><p>ii.&#xA0; Push the branch to the remote repository.</p><pre><code class="language-bash">git push -u origin newBranch</code></pre><p>This command uploads your branch to the remote repository. The <code>-u</code> option sets the upstream branch, establishing a tracking relationship between your local branch and the remote branch.</p><p>iii.&#xA0; Open your GitHub repository in a web browser and click Compare and pull request.</p><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-16-59-01.png" class="kg-image" alt="The Beginner&#x2019;s Guide To Git &amp; GitHub" loading="lazy" width="932" height="369" srcset="https://keelancannoo.com/content/images/size/w600/2024/06/Screenshot-from-2021-12-29-16-59-01.png 600w, https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-16-59-01.png 932w" sizes="(min-width: 720px) 720px"></figure><p>iv.&#xA0; Click Create pull request.</p><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-17-16-37.png" class="kg-image" alt="The Beginner&#x2019;s Guide To Git &amp; GitHub" loading="lazy" width="902" height="575" srcset="https://keelancannoo.com/content/images/size/w600/2024/06/Screenshot-from-2021-12-29-17-16-37.png 600w, https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-17-16-37.png 902w" sizes="(min-width: 720px) 720px"></figure><p><strong>10.&#xA0; Merging a Pull Request</strong></p><p>Merge branches to incorporate changes from one branch into another.</p><ul><li><strong>Open the Pull Request</strong>: Go to your GitHub repository and navigate to the &quot;Pull requests&quot;
tab.</li><li><strong>Select the Pull Request</strong>: Click on the pull request you want to merge.</li><li><strong>Merge the Pull Request</strong>: Review the changes one last time to ensure everything is correct. Click the green &quot;Merge pull request&quot; button. Confirm the merge by clicking the &quot;Confirm merge&quot; button.</li></ul><figure class="kg-card kg-image-card"><img src="https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-17-00-26.png" class="kg-image" alt="The Beginner&#x2019;s Guide To Git &amp; GitHub" loading="lazy" width="922" height="619" srcset="https://keelancannoo.com/content/images/size/w600/2024/06/Screenshot-from-2021-12-29-17-00-26.png 600w, https://keelancannoo.com/content/images/2024/06/Screenshot-from-2021-12-29-17-00-26.png 922w" sizes="(min-width: 720px) 720px"></figure><p><strong>11.&#xA0; Pulling changes</strong></p><p>To update your local repository with changes from the remote repository, follow these steps:</p><ol><li><strong>Switch to the <code>master</code> Branch</strong>: Ensure you are on the <code>master</code> branch or the main branch you want to update.</li></ol><pre><code class="language-bash">git checkout master</code></pre><ol start="2"><li><strong>Pull the Changes</strong>: Fetch and merge changes from the remote <code>master</code> branch into your local <code>master</code> branch.</li></ol><pre><code class="language-bash">git pull origin master</code></pre><p>If you have set the upstream branch, you can simply run <code>git pull</code>.</p><p><strong>12.&#xA0; Merging two branches on your local repository</strong></p><p>To merge changes from one branch into another:</p><p>i.&#xA0; Check out the target branch that you want to merge into.</p><pre><code class="language-bash">git checkout targetBranch</code></pre><p>ii.
<strong>Merge the Other Branch</strong>: Merge the changes from the other branch (here <code>branchX</code>) into your current branch.</p><pre><code class="language-bash">git merge branchX</code></pre><p><strong>13.&#xA0; Viewing all commits made to a repository</strong></p><p>View the commit history to understand the evolution of your project.</p><pre><code class="language-bash">git log</code></pre>]]></content:encoded></item></channel></rss>