<?xml version="1.0" encoding="UTF-8"?><?xml-stylesheet href="/rss.xsl" type="text/xsl"?><rss version="2.0" xmlns:content="http://purl.org/rss/1.0/modules/content/"><channel><title>Fotis Alexandrou - Software Engineer</title><description>Personal Blog &amp; Work Samples. Technical posts, tutorials &amp; personal opinions about the Developer&apos;s life</description><link>https://falexandrou.dev</link><item><title>How to prepare your web application for the cloud</title><link>https://falexandrou.dev/posts/2021-02-06-how-to-prepare-your-web-application-for-the-cloud</link><guid isPermaLink="true">https://falexandrou.dev/posts/2021-02-06-how-to-prepare-your-web-application-for-the-cloud</guid><description>Structuring your web application for the cloud, some things to consider when starting out</description><pubDate>Sat, 06 Feb 2021 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Crafting a web application that is able to run in the cloud is a bit different from building one that runs on a single web server, and some developers learn this the hard way. To avoid last-minute surprises and be well prepared for a cloud migration, here&apos;s a list of the things you need to consider while you’re still developing the app.&lt;/p&gt;
&lt;h3&gt;Develop a mindset first&lt;/h3&gt;
&lt;p&gt;The first and probably most important thing you need to develop is a cloud mindset. We need to assume that the app always runs in a distributed way, across multiple, disposable instances, even if you plan to deploy it on a single server with everything bundled in it. This way, deploying or migrating to the cloud becomes a much easier process that requires minimal effort. Notice the word &lt;strong&gt;disposable&lt;/strong&gt; we used to describe our instances; this is going to be key for the next things we need to consider.&lt;/p&gt;
&lt;p&gt;You also need to keep in mind that hacky solutions like connecting via SSH to run a script, or directly updating the database, are off the table as well (they should be off the table anyway) since, in our theoretical example, we have 30 server instances, the databases are sharded and there is a cluster of cache servers. We can’t &quot;quickly update&quot; our CSS files or upload a file via FTP either, since every file is stored in an object store which doesn’t support manual updates, and is served through a Content Delivery Network (CDN), which means that the original file is cached across the globe.&lt;/p&gt;
&lt;p&gt;Sounds complicated? Well, it&apos;s a different mindset, which means it might sound complicated at first, but it will then become second nature to you and to the way you craft your applications. Not taking the server for granted, and keeping in mind that all services are external and distributed, is the key concept. Now let’s break it down into smaller parts.&lt;/p&gt;
&lt;h3&gt;File Storage&lt;/h3&gt;
&lt;p&gt;Media files, user uploads, static assets and everything in between can only live on the server for a short period of time. The storage on cloud instances is usually ephemeral, which means that whenever the instance gets terminated, the disks are terminated as well. There is a way to add persistent volumes, or even elastic file systems that scale to your needs, but this comes at a cost and it’s not the common case for a web application that, say, handles user uploads.&lt;/p&gt;
&lt;p&gt;Most cloud providers, like AWS, DigitalOcean, Azure and others, offer something called Object Storage, which can be eloquently described as &quot;a bucket of files&quot;. Such buckets can contain media files, the application’s static files, PDF documents or even larger files that need to be downloaded by multiple users simultaneously.&lt;/p&gt;
&lt;p&gt;There are two common patterns for uploading files to an object store: a) directly uploading the files via an API, an SDK or a framework library as soon as the user submits them, or b) syncing the server’s file storage with the object store and deleting the local files afterwards. Pattern (a) is the common case and the best practice; pattern (b) is mostly used during cloud migrations, when the files the application has accumulated so far need to be moved to the cloud.&lt;/p&gt;
&lt;p&gt;Every object (file) in the object store comes with its own permissions and path, just as it would on a plain server, with the only exception that the path is now a URL; so, instead of storing a file path in the database, you can now store the full URL of the file.&lt;/p&gt;
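&lt;p&gt;As a rough sketch of pattern (a), here is what an upload helper could look like. The bucket name, the URL scheme and the in-memory dictionary standing in for the object store are all hypothetical; a real application would call the provider’s SDK instead:&lt;/p&gt;

```python
import uuid

# In-memory stand-in for a real bucket; a production app would call the
# cloud provider's SDK. Bucket name and URL scheme are hypothetical.
BUCKET_NAME = "my-app-uploads"
FAKE_BUCKET = {}

def upload_and_get_url(filename, data):
    """Upload a user file under a unique key and return its public URL."""
    # A unique key prevents collisions between files with the same name.
    key = f"uploads/{uuid.uuid4().hex}/{filename}"
    FAKE_BUCKET[key] = data
    # Store this full URL in the database instead of a local file path.
    return f"https://{BUCKET_NAME}.example-object-store.com/{key}"

url = upload_and_get_url("avatar.png", b"...image bytes...")
```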
&lt;p&gt;A minor caveat here is that if your files need to be public and accessed directly and frequently by users, an object store might prove to be slow, which means you need to serve the files via a CDN that &quot;sits&quot; in front of the object store and heavily caches the files until they get modified or the cache expires.&lt;/p&gt;
&lt;p&gt;Static assets can be stored in the object store as well, since the web servers may hold different file paths depending on your build process. What needs to be taken into account is that, because the assets are stored in the object store and served by a CDN, they are heavily cached. The most popular solution to this issue is a predictable file hash generated by a build tool: whenever the application serves a different file name, the CDN retrieves the new file from the object store and caches it globally all over again.&lt;/p&gt;
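&lt;p&gt;A minimal sketch of the content-hash idea, using only the standard library (the naming scheme here is an assumption; in practice build tools like webpack generate such names for you):&lt;/p&gt;

```python
import hashlib

def hashed_asset_name(filename, content):
    """Return a cache-busting name like 'app.3f8a1c2d.css'.

    The hash depends only on the content, so the name changes exactly
    when the file changes, forcing the CDN to fetch and re-cache it.
    """
    digest = hashlib.md5(content).hexdigest()[:8]
    stem, _, ext = filename.rpartition(".")
    return f"{stem}.{digest}.{ext}"

name = hashed_asset_name("app.css", b"body { color: red; }")
```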
&lt;h3&gt;Session storage&lt;/h3&gt;
&lt;p&gt;Storing the users’ sessions on the server is another cloud anti-pattern. In our theoretical example, where our application is spawned across 30 web server instances, we can’t have the sessions stored there, because the user might be directed to a different instance upon every subsequent request. Some load balancers mitigate this issue with &quot;sticky sessions&quot;, an option that instructs the load balancer to use the same web server instance upon subsequent requests, but this option comes with a hidden performance cost.&lt;/p&gt;
&lt;p&gt;The actual solution to the session issue is a centralized cache store that is used for storing the sessions, something like Redis or Memcached. Luckily, all major frameworks and languages have built-in support for keeping sessions in a separate store, enabling the web server to forget all about session management, hence making our app cloud-friendly.&lt;/p&gt;
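&lt;p&gt;As an illustration, a hypothetical Django settings excerpt could look like this, assuming the django-redis package is installed and the Redis endpoint shown is reachable (both are assumptions for the example):&lt;/p&gt;

```python
# Hypothetical Django settings excerpt: keep sessions in a shared Redis
# cache instead of on the local instance, so any web server can serve
# any user's request.
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",   # provided by django-redis
        "LOCATION": "redis://cache.internal:6379/1",  # hypothetical endpoint
    }
}
SESSION_ENGINE = "django.contrib.sessions.backends.cache"
SESSION_CACHE_ALIAS = "default"
```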
&lt;h3&gt;Retrieving &amp;amp; storing information in cache&lt;/h3&gt;
&lt;p&gt;Speaking of cache, we can’t really have the cache live inside our web server, simply because our web servers are too many and the user’s request might hit a random server where the cache item is missing. Again, a centralized cache store, where we store and retrieve items via TCP, is key here. This way, our cache service can scale up or down depending on our caching needs, and our items are available even if a web server instance gets replaced.&lt;/p&gt;
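&lt;p&gt;The store-and-retrieve pattern described above can be sketched as follows; the dict-backed class is only a stand-in for a real networked cache client like Redis or Memcached:&lt;/p&gt;

```python
import time

class FakeCache:
    """Dict-backed stand-in for a networked cache such as Redis."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        value, expires_at = self._data.get(key, (None, 0))
        return value if time.monotonic() < expires_at else None
    def set(self, key, value, ttl):
        self._data[key] = (value, time.monotonic() + ttl)

cache = FakeCache()

def get_or_compute(key, compute, ttl=60):
    """Read-through caching: check the shared cache before recomputing."""
    value = cache.get(key)
    if value is None:
        value = compute()
        cache.set(key, value, ttl)
    return value

first = get_or_compute("user:42:profile", lambda: "expensive result")
second = get_or_compute("user:42:profile", lambda: "never called")  # cache hit
```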
&lt;p&gt;If you find yourself in need of more control over the cache server though, for example if you force-empty your cache periodically or whenever your database schema changes, you might need to do that programmatically, with a few scripts that run on your deployment tool.&lt;/p&gt;
&lt;h3&gt;Scheduled jobs&lt;/h3&gt;
&lt;p&gt;Scheduled jobs (eg. Cron Jobs) can run on your web server, or on dedicated worker instances that only do that. The important thing to consider here is that the worker instances can scale as well. The problem that arises, now that our theoretical scenario has 15 worker instances, is that we don’t want them to process the same data over and over again, because a) it would be a waste of resources and b) we might run into problems. Think, for example, of a scheduled job that processes some data, then emails the users about the outcome. We wouldn’t want our users to receive the same email 15 times, would we?&lt;/p&gt;
&lt;p&gt;There are two possible solutions to this issue: the first would be a mutually exclusive lock and the second would be a queue. These two approaches might sound similar, but they aren’t really. With the first approach, we &quot;lock&quot; the data that is about to be processed, eg. by updating a boolean flag in the database, so that the next worker process that runs excludes this data from what it is going to process, and so on and so forth.&lt;/p&gt;
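&lt;p&gt;A minimal sketch of the flag-based lock, with an in-memory SQLite table standing in for the shared database (the table and column names are made up). The UPDATE statement re-checks the flag, which is what prevents two workers from claiming the same row:&lt;/p&gt;

```python
import sqlite3

# In-memory SQLite stands in for the shared production database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE emails (id INTEGER PRIMARY KEY, claimed INTEGER DEFAULT 0)")
db.executemany("INSERT INTO emails (id) VALUES (?)", [(1,), (2,), (3,)])

def claim_next(conn):
    """Claim the next unclaimed row, or return None when nothing is left."""
    row = conn.execute(
        "SELECT id FROM emails WHERE claimed = 0 ORDER BY id LIMIT 1"
    ).fetchone()
    if row is None:
        return None
    # The WHERE clause re-checks the flag: if another worker claimed this
    # row in the meantime, rowcount is 0 and we simply try the next one.
    cursor = conn.execute(
        "UPDATE emails SET claimed = 1 WHERE id = ? AND claimed = 0", (row[0],)
    )
    return row[0] if cursor.rowcount == 1 else claim_next(conn)

first = claim_next(db)   # claims row 1
second = claim_next(db)  # claims row 2
```

In a real relational database you would typically wrap this in a transaction with row locking (or a single atomic UPDATE), but the check-then-flip idea is the same.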
&lt;p&gt;The second option is to have a queue (for example a first-in-first-out queue) where we enqueue the data that a job requires in order to run (using some central store like Redis); the worker processes then retrieve and delete this data from the queue, so that it’s only available to one specific process.&lt;/p&gt;
&lt;p&gt;One caveat here is the failure strategy: whenever a worker fails for some reason, the failure needs to be handled somehow, so that no job (or data) gets lost. Practically, you’d have to enqueue the message with its original payload again so that another worker picks it up, and/or notify the team that something went wrong.&lt;/p&gt;
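&lt;p&gt;Both the queue and the failure strategy can be sketched with a plain Python deque standing in for a Redis list (LPUSH and BRPOP would be the real Redis calls):&lt;/p&gt;

```python
from collections import deque

# A deque stands in for a Redis list shared by all workers.
jobs = deque()

def enqueue(payload):
    jobs.appendleft(payload)  # push on the left, pop from the right: FIFO

def work(handler):
    """Pop one job and process it; on failure, requeue the original payload."""
    try:
        payload = jobs.pop()
    except IndexError:
        return None  # queue is empty
    try:
        return handler(payload)
    except Exception:
        # Failure strategy: put the original payload back so another
        # worker picks it up (a real system would also alert the team).
        enqueue(payload)
        raise

enqueue({"user_id": 1})
enqueue({"user_id": 2})
result = work(lambda payload: payload["user_id"])  # processes user 1 first
```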
&lt;h3&gt;Database migrations&lt;/h3&gt;
&lt;p&gt;At the beginning of this article, we mentioned that we can’t perform manual database updates, an action which is an anti-pattern in general, not only in cloud-based applications. In cloud environments, however, the database might have replicas or be sharded, making a manual update even more difficult.&lt;/p&gt;
&lt;p&gt;The solution is to treat every database update as a code change, meaning that your application should leverage database migrations: versioned scripts that update the database’s schema and data.&lt;/p&gt;
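&lt;p&gt;To make the idea concrete, here is a toy migration runner, assuming an SQLite database; real tools (Django migrations, Rails Active Record migrations, Flyway and so on) implement the same version-tracking idea far more robustly:&lt;/p&gt;

```python
import sqlite3

# Each migration is a version number plus the SQL that takes the schema there.
MIGRATIONS = [
    (1, "CREATE TABLE users (id INTEGER PRIMARY KEY, email TEXT)"),
    (2, "ALTER TABLE users ADD COLUMN created_at TEXT"),
]

def migrate(conn):
    """Apply every migration the database hasn't seen yet, in order."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    row = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()
    current = row[0] or 0
    for version, statement in MIGRATIONS:
        if version > current:
            conn.execute(statement)
            conn.execute("INSERT INTO schema_version (version) VALUES (?)", (version,))
    conn.commit()

conn = sqlite3.connect(":memory:")
migrate(conn)  # applies both migrations
migrate(conn)  # no-op: the database is already at version 2
```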
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;Crafting a web application for a cloud-based environment relies mainly on a different mindset, one that forgets all about &quot;the server&quot; and thinks about &quot;the services&quot; instead. Most frameworks and tools these days make our lives easier by embedding cloud tools in our workflow; however, we as developers have the responsibility to enable these technologies and use them to the benefit of our users and ourselves.&lt;/p&gt;
</content:encoded><author>Fotis</author></item><item><title>What to consider when choosing a new tech stack</title><link>https://falexandrou.dev/posts/2021-01-26-choosing-the-stack-for-a-new-project</link><guid isPermaLink="true">https://falexandrou.dev/posts/2021-01-26-choosing-the-stack-for-a-new-project</guid><description>Selecting a technology stack is hard, and requires serious thought, so let&apos;s break down the major things you need to consider</description><pubDate>Tue, 26 Jan 2021 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;When it comes to creating a new stack, all options are on the table. Some people see it as a fine opportunity to learn a new stack, others prefer stuff they&apos;re already familiar with, and some others prefer to distribute their codebase across micro-services consisting of various micro-stacks. This process has triggered a lot of flame wars on Twitter and in offices around the world but, believe it or not, performance benchmarks and your peers’ opinions are not the only metrics you can take into account.&lt;/p&gt;
&lt;p&gt;Selecting a technology stack is hard and requires serious thought, so let&apos;s break down the major things you need to consider before creating that git repository. We’ll do that by going through my professional experience, the mistakes I’ve made and the flame wars I’ve been part of, so that you don&apos;t have to.&lt;/p&gt;
&lt;figure class=&quot;w-full&quot;&gt;
&lt;img class=&quot;h-auto max-w-full rounded-lg&quot; src=&quot;/images/tt.jpg&quot; alt=&quot;Tech team&quot;&gt;
&lt;figcaption class=&quot;mt-2 text-sm text-center text-gray-500 dark:text-gray-400&quot;&gt;
Photo by &lt;a href=&quot;https://unsplash.com/@heylagostechie?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText&quot;&gt;heylagostechie&lt;/a&gt; on &lt;a href=&quot;https://unsplash.com/s/photos/tech-team?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText&quot;&gt;Unsplash&lt;/a&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h3&gt;The right tool for the people involved&lt;/h3&gt;
&lt;p&gt;Every stack will be as useful as the hours and hair it saves for the people developing on it and operating it. Having said that, it&apos;s only obvious that your choice should be directly related to your team, whether it exists yet or not. There are two distinct cases here: the first is that the team is already in place, which means that the stack’s choice should be based on the team’s modus operandi; the second is that the team is a work in progress.&lt;/p&gt;
&lt;p&gt;When the team is already in place, or if the team consists of just yourself, the obvious choice would be to go with what they already know. However, there&apos;s a certain fatigue after some time, and people might lean towards trying out something new, or the technology the team is already working with might start to feel outdated. That&apos;s when you need to get in touch with your team leaders and really listen to them. Diversifying your stack means that you’ll need people who are experts in the new field and, besides that, you’ll have to factor in the cost of their learning, the technical debt and operations. There also has to be a critical mass that shares the same expertise, since you can’t rely on a single team member to be the source of truth. On the other hand, if you stick with an already trusted technology stack, you will be able to deliver faster, the problems you face will feel like just practice, and there might already be a knowledge base for issues that come up.&lt;/p&gt;
&lt;p&gt;Regarding the second case, you&apos;re on a clean slate. You have the freedom to choose any person to join or lead your team; however, I’d do some research first: I would research the developer communities for the stack choices I&apos;d have in mind, then look into their rates and activity and compare them with the prospective team size I&apos;d have in mind. It may sound trivial, and you might think “oh! the internets will help me find candidates”, but it&apos;s not as easy as it sounds. Building a development team is actually more about making highly skilled professionals work well with each other, so you might have to think twice about your team’s structure.&lt;/p&gt;
&lt;h3&gt;The right tool for the job&lt;/h3&gt;
&lt;p&gt;The tech stack is a fun playground where projects come into life, however, projects are defined by timeline, budget, project scope, integrations and scale, among others. Breaking them down into chunks, will help us decide what the right tech stack for the job is:&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Tight deadline, limited budget or disposable projects&lt;/strong&gt;: Most of the time, the right tool for such jobs is something that&apos;s “batteries included”, meaning that the suggested solution should provide all the necessary tooling out of the box, or in a way that saves considerable amounts of manual labor and time compared to another solution. For example, a rapid prototype for a product or a disposable “proof of concept” app for a startup can be built with a framework, CMS or site-builder that’s basically designed around solving this problem fast and efficiently.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Highly performant projects&lt;/strong&gt;: For a project that is designed to receive thousands of requests per second right from the beginning of its lifetime, you may need to bring back those performance benchmarks we mentioned earlier. Beware, there&apos;s a trap here: if you ask the project’s stakeholders what the traffic projections for the project are going to be, they will most likely reply with massive numbers, and that&apos;s expected, since they&apos;re vested in the business and they aim for success. However, experience shows that unless the project is part of an already ongoing product facing massive traffic, scaling issues will come up gradually rather than from day one, allowing you to mitigate them as they appear, which is cheaper than bullet-proofing your app to prevent them. I&apos;m not implying you should violate all the rules and build something that&apos;s slow; all I&apos;m saying is don’t over-engineer or over-provision the project, and don’t build for massive scale before you get your feet wet first.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Projects that rely on integrations&lt;/strong&gt;: Usually, the integrations involve well-established software that is either deeply integrated into the project (ie. the project is built around a specific piece of software by using some adapter), or acts as an external service, often via an SDK that communicates over the network. In the first case, the technology is sort of dictated by the integration itself: for example, if you want to build a library around Ansible, it has to be done in Python, because Ansible itself is written in Python and it would take a lot of effort to use another language, so the decision is easy. In the second case there are more options; however, you need to check whether an SDK is available and whether it&apos;s a stable, fully-featured library.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;Open source or portfolio projects&lt;/strong&gt;: These types of projects provide plenty of room for experimentation, and are actually the right place to show off and play around with various new technologies.&lt;/p&gt;
&lt;figure class=&quot;w-full&quot;&gt;
&lt;img class=&quot;h-auto max-w-full rounded-lg&quot; src=&quot;/images/tool.jpg&quot; alt=&quot;Tools&quot;&gt;
&lt;figcaption class=&quot;mt-2 text-sm text-center text-gray-500 dark:text-gray-400&quot;&gt;
Photo by &lt;a href=&quot;https://unsplash.com/@hnhmarketing?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText&quot;&gt;Hunter Haley&lt;/a&gt; on &lt;a href=&quot;https://unsplash.com/s/photos/tool-set?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText&quot;&gt;Unsplash&lt;/a&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h3&gt;Choose technology that is already familiar, documented, proven &amp;amp; stable&lt;/h3&gt;
&lt;p&gt;One mistake most of us have run into when it comes to using some new technology, is that we fall into the hype trap. The internet is a magical place full of potential, advertisements and hyped-up new technologies, and we developers like to experiment with all the shiny new objects!&lt;/p&gt;
&lt;p&gt;Before adding a piece of technology to your stack, you need to look into its maturity, stability and documentation. It also has to be actively maintained. A large following on StackOverflow that would help you with the problems you &lt;em&gt;will&lt;/em&gt; run into wouldn&apos;t hurt either.&lt;/p&gt;
&lt;p&gt;Speaking from experience, my team once integrated a fairly young document-based database into one of our projects, and I bet you are familiar with the product owners’ line: “since this worked for this feature, let’s build on top of that for the next 5 features”. Pretty soon, after torturing ourselves with poor adoption, which leads to poor documentation online compared to its older rivals, this database became mission-critical, but we were happy. Until one day they ran out of money and time, their contributors decided to jump ship, and then we were sad. We didn’t have the time or resources to maintain the database ourselves, of course, so we had to migrate everything into our existing PostgreSQL database, which was introduced in 1996 and is still one of the most popular open source relational database systems available. This is of course an edge case and doesn’t happen every day; however, I’ve seen frameworks, libraries and other utilities die frequently, so I try to be cautious by making sure everything my team relies upon is current, actively maintained, well documented, stable and, most importantly, properly tested.&lt;/p&gt;
&lt;p&gt;Sometimes, using such technologies may sound boring, and I’m pretty sure there will be other, fancier solutions that sound much more interesting, but that’s exactly the issue. Boring technologies are usually stable and well tested. Ruby on Rails, for example, has provided the same paradigms and interfaces for the past few years, and that may seem boring. The counter-argument is that it took many years for Rails to become “boring”, and over the course of those years, thousands of commits were added to make the framework a stable and reliable solution that powers billion-dollar companies all around the world.&lt;/p&gt;
&lt;h3&gt;If your team is small, a monolith is just fine&lt;/h3&gt;
&lt;p&gt;I think the title is self-explanatory here, but let me explain why in a screaming &lt;a href=&quot;/img/posts/as.gif&quot;&gt;Adam Sandler&lt;/a&gt; voice: because of all the duplication, the networking requests and the operational maintenance your team has to go through. Which leads me to the next chapter:&lt;/p&gt;
&lt;h3&gt;Running &amp;amp; maintenance costs&lt;/h3&gt;
&lt;p&gt;There are two types of costs when working with a new technology stack: implementation and running costs. The first is pretty straightforward and the latter can be predicted, but what about hidden costs? For example, how much does it cost to replace a certain piece of the stack, and how much technical debt would doing so introduce?&lt;/p&gt;
&lt;figure class=&quot;w-full&quot;&gt;
&lt;img class=&quot;h-auto max-w-full rounded-lg&quot; src=&quot;/images/sib.jpg&quot; alt=&quot;Simple is beautiful&quot;&gt;
&lt;figcaption class=&quot;mt-2 text-sm text-center text-gray-500 dark:text-gray-400&quot;&gt;
Photo by &lt;a href=&quot;https://unsplash.com/@jeffreymwegrzyn?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText&quot;&gt;Jeffrey Wegrzyn&lt;/a&gt; on &lt;a href=&quot;https://unsplash.com/s/photos/simple-is-beautiful?utm_source=unsplash&amp;amp;utm_medium=referral&amp;amp;utm_content=creditCopyText&quot;&gt;Unsplash&lt;/a&gt;&lt;/figcaption&gt;
&lt;/figure&gt;
&lt;h3&gt;Don&apos;t get fancy&lt;/h3&gt;
&lt;p&gt;They say “simple is beautiful” and I can’t really stress this enough: if your technology stack uses 10 different technologies, you’ll have to multiply your web searches for hair-pulling bugs by 10, and do the same for your package upgrades, your technical debt, and so on. Keeping your stack as simple as you can will help you focus less on administrative tasks and more on writing features and their tests, which is what helps the business succeed.&lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;Choosing a technology stack is hard and requires a lot of thought. It’s also a process which affects your team’s day-to-day life and may introduce more costs than you initially accounted for. Don’t get blinded by shiny new objects; choose what works best for your team and your business, even if it’s a fairly old, boring, “been there, done that” piece of software.&lt;/p&gt;
</content:encoded><author>Fotis</author></item><item><title>Testing your code is not optional.</title><link>https://falexandrou.dev/posts/2020-12-20-testing-your-code</link><guid isPermaLink="true">https://falexandrou.dev/posts/2020-12-20-testing-your-code</guid><description>The benefits of having an efficient and detailed testing suite in place</description><pubDate>Sun, 20 Dec 2020 00:00:00 GMT</pubDate><content:encoded>&lt;h3&gt;Let’s start with a story...&lt;/h3&gt;
&lt;p&gt;A quadrillion years ago, I used to work for companies none of which had automated tests in place and, believe it or not, testing was not that popular. We wrote the code, tested it manually on our local machines, deployed our newly committed piece of art and called it a day. If, however, something went wrong, we tried to replicate the issue on a test environment, fixed it, deployed the fix and so on, usually hoping that the fix wouldn&apos;t introduce a &lt;a href=&quot;https://en.wikipedia.org/wiki/Software_regression&quot;&gt;regression&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;Then, writing unit tests somehow gained traction and became popular among developers who liked following best practices, but still, there was a lot of ground to be covered in fast-moving startups and agencies, where delivering projects is a time-sensitive matter. I even remember debating with the lead developer of a VC-backed startup, who at some point insisted that testing is a waste of time and that&apos;s why they didn&apos;t do it.&lt;/p&gt;
&lt;h3&gt;So, is testing our code a waste of time?&lt;/h3&gt;
&lt;p&gt;&lt;strong&gt;No, it&apos;s not!&lt;/strong&gt; Believe it or not though, this was (and hopefully isn&apos;t anymore) a somewhat acceptable opinion a few years back. While testing may require some development time, it actually helps save enormous amounts of time if done properly.&lt;/p&gt;
&lt;p&gt;Let&apos;s think of the following scenario:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;A developer works on Feature X for a few days, completes their work and pushes code to the testing environment.&lt;/li&gt;
&lt;li&gt;Then, a QA or the developer themselves tests manually, finds a couple of issues which are then fixed and the project hits production.&lt;/li&gt;
&lt;li&gt;The code then interacts with user-input or other existing data in a database and errors occur.&lt;/li&gt;
&lt;li&gt;The Product Owner raises a ticket and assigns the QA; the QA reproduces the issue, adds technical information to the ticket and sends it to the developer; the developer applies a fix, and the ticket goes back and forth between the stakeholders until the code hits production.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;In the meantime, there might be regression issues, and the back and forth continues until, at some point, the ticket closes, while several others have been raised.&lt;/p&gt;
&lt;p&gt;Chaos? Yes. Does it remind you of something? It certainly does for me. This was the daily, fire-fighting routine for the lead developer mentioned earlier.&lt;/p&gt;
&lt;p&gt;Having an efficient and detailed testing suite in place saves you some of this trouble: the code and tests are written, and the QA does what they&apos;re supposed to do (make sure the finished product meets certain quality standards), instead of getting blocked because they stumbled upon a tiny mistake that could have been caught beforehand. Ideal? Maybe. Let&apos;s now break down the several ways we can save time.&lt;/p&gt;
&lt;h3&gt;Testing is a safety net, but also a documentation and developer onboarding tool&lt;/h3&gt;
&lt;p&gt;Usually, the scope of a function is (or should be) limited and, depending on the input, there are certain outcomes. Covering a few default cases is a nice way to test said function and save ourselves (and the QA) some time otherwise lost to runtime errors that could easily have been prevented.&lt;/p&gt;
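&lt;p&gt;For example, a hypothetical short-scoped helper and the default cases that cover it might look like this (the function and test names are made up for illustration):&lt;/p&gt;

```python
def normalize_username(raw):
    """Small, limited-scope helper: trim, lowercase, reject empty input."""
    if raw is None:
        raise ValueError("username is required")
    cleaned = raw.strip().lower()
    if not cleaned:
        raise ValueError("username is required")
    return cleaned

# Covering the default cases up front catches the easy runtime errors
# before the QA (or a user) ever sees them.
def it_normalizes_mixed_case_input():
    assert normalize_username("  Fotis ") == "fotis"

def it_rejects_blank_input():
    try:
        normalize_username("   ")
    except ValueError:
        return
    raise AssertionError("expected a ValueError")

it_normalizes_mixed_case_input()
it_rejects_blank_input()
```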
&lt;p&gt;If, however, not all cases were covered and some bug gets reported, it&apos;s fine (in fact, great!), because this means we&apos;re enhancing the testing suite with another case, making our code even more resilient.&lt;/p&gt;
&lt;p&gt;Along with this benefit, there lies another: tests are a great (yet not the only) documentation and onboarding tool, which means that they should be clear, descriptive and able to provide a high-level overview of what the code should accomplish.&lt;/p&gt;
&lt;p&gt;For example, here is an excerpt from a few tests I wrote for a Django view that launches deployment operations:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;def it_creates_and_runs_an_operation_given_a_commit_reference(mock_user, create_operation_data):
    # ...
    assert response.status_code == 201
    assert response.data[&apos;id&apos;]
    assert response.data[&apos;kind&apos;] == &apos;deployment&apos;
    # ...
    assert operation.is_running

def it_creates_and_runs_an_operation_without_a_commit_reference(...):
    # ....
    assert operation.is_running
    assert mock_celery_task.call_count == 1
    assert operation.reference == expected_commit_information[&apos;reference&apos;]

def it_does_not_start_the_operation_when_another_is_queued_prior_to_it(...):
    # Create an operation that is queued prior to the one we&apos;re queueing
    prior = operation_factory.create(...)
    # ...
    assert response.status_code == 201
    # ...
    assert not operation.can_start()
    assert not operation.is_running
    assert not mock_celery_task.call_count
    assert operation.is_queued
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;We now have a group of tests with descriptive names that outline how an operation can be started with or without a given reference, and what the expected statuses for the model in the database and the returned HTTP status codes should be. When I revisit this example in the future, or whenever a new hire joins the project, it will hopefully be obvious to them what the expected behavior should be, without having to read long documentation. It also covers three of the most common cases in the system related to the &quot;operations&quot; feature.&lt;/p&gt;
&lt;h3&gt;Keeping tests fast &amp;amp; testing the right things&lt;/h3&gt;
&lt;p&gt;Assuming you are using a web framework like Django or Ruby on Rails and you need to test an ORM or an ActiveRecord model, it doesn&apos;t make any sense to test the &lt;code&gt;.save&lt;/code&gt; or &lt;code&gt;.get&lt;/code&gt; methods, because they&apos;re already tested for you by the amazing people who build the framework of your choice. However, testing a custom validator you have added to your model is a requirement.&lt;/p&gt;
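&lt;p&gt;As a framework-free illustration, here is a made-up custom validator, similar in spirit to one you would attach to a model field, along with the two tests it actually deserves:&lt;/p&gt;

```python
# Hypothetical custom validator: the framework's own .save/.get machinery
# needs no tests from us, but this logic does.
def validate_slug(value):
    if not value or not value.replace("-", "").isalnum():
        raise ValueError(f"{value!r} is not a valid slug")
    return value

def it_accepts_hyphenated_slugs():
    assert validate_slug("my-first-post") == "my-first-post"

def it_rejects_spaces_and_symbols():
    try:
        validate_slug("my first post!")
    except ValueError:
        return
    raise AssertionError("expected a ValueError")

it_accepts_hyphenated_slugs()
it_rejects_spaces_and_symbols()
```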
&lt;p&gt;The scope of a test should be limited as well. Let&apos;s take the following real-world example where we use pytest (with some syntactic sugar) to test a &lt;a href=&quot;https://docs.celeryproject.org/en/stable/userguide/index.html&quot;&gt;celery task&lt;/a&gt; that starts a queued deployment operation. The &lt;code&gt;create_container&lt;/code&gt; function runs a Docker container, but testing this is a) out of the given scope and b) slow if we create the actual container. The solution to this issue would be to &lt;a href=&quot;https://en.wikipedia.org/wiki/Mock_object&quot;&gt;mock&lt;/a&gt; the &lt;code&gt;create_container&lt;/code&gt; call within the current scope:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;@pytest.mark.django_db
def describe_start_operation_task():
    def it_runs_the_task(queued_deployment, mock_container):
        allow(tasks).create_container.and_return(mock_container)
        assert not queued_deployment.task_id
        assert not queued_deployment.container_id

        # run the task
        celery_task = tasks.start_operation.s(operation_id=queued_deployment.id).apply_async()

        # make sure the deployment fields were populated accordingly
        queued_deployment.refresh_from_db()
        assert queued_deployment.task_id == celery_task.id
        assert queued_deployment.container_id == mock_container.id
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;&lt;a href=&quot;https://twitter.com/sandimetz&quot;&gt;Sandi Metz&lt;/a&gt; and &lt;a href=&quot;https://twitter.com/kytrinyx&quot;&gt;Katrina Owen&lt;/a&gt;, in their great book &lt;a href=&quot;https://sandimetz.com/99bottles&quot;&gt;99 Bottles of OOP&lt;/a&gt;, clearly and constantly state that the tests should run upon every line of code that gets changed, and they&apos;re right. In practice, this means that you should be able to run your entire test suite as fast as possible.&lt;/p&gt;
&lt;p&gt;In our example, we managed to save quite some time by mocking the Docker container creation, which is handled internally by a well-tested library. Other examples might include external network requests, loading big files, email sending etc.&lt;/p&gt;
&lt;p&gt;You might also have noticed in my examples that there is a certain degree of duplication. This is fine. The tests are not a place to practice &lt;a href=&quot;https://en.wikipedia.org/wiki/Don%27t_repeat_yourself&quot;&gt;DRY&lt;/a&gt;; they should be explicit and detailed, so that another person can understand every tiny bit of the process. Besides, doing fancy things usually leads to errors, and you do want your tests to be straightforward and valid.&lt;/p&gt;
&lt;h3&gt;Meet your new best friends&lt;/h3&gt;
&lt;p&gt;In the examples provided above, I introduced a few concepts that you might not be familiar with so far, but that will become your new best friends once you dive deep into testing: Factories, Fixtures, Fakes, Stubs and Mocks. Here are some basic guidelines to help you decide which one you need each time:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;If you want an object populated with random data, register a factory.&lt;/li&gt;
&lt;li&gt;If you need an object with specific, predictable data, go for a fixture.&lt;/li&gt;
&lt;li&gt;If you need a simplified version of a method or a class, with limited functionality, fake it.&lt;/li&gt;
&lt;li&gt;If you need a set of predefined data returned by a method, stub it.&lt;/li&gt;
&lt;li&gt;If you just need to count the calls to a method, mock it.&lt;/li&gt;
&lt;/ul&gt;
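To make the last three distinctions concrete, here is a small illustrative sketch in Python using the standard library's unittest.mock. All the names (FakeCache, mailer and its methods) are hypothetical, not from the example above:

```python
from unittest import mock

# A fake: a simplified, fully working stand-in for a real cache server.
class FakeCache:
    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def set(self, key, value):
        self._data[key] = value

# A stub: a method forced to return a set of predefined data.
mailer = mock.Mock()
mailer.fetch_template.return_value = "Hello, {name}!"

# A mock: we only care about how it was called.
mailer.send("alice@example.com", body="Hello, Alice!")
mailer.send.assert_called_once_with("alice@example.com", body="Hello, Alice!")

# The fake behaves like the real thing, just simpler and in-memory.
cache = FakeCache()
cache.set("greeting", mailer.fetch_template())
print(cache.get("greeting"))  # prints: Hello, {name}!
```

Notice how the fake actually implements behavior, the stub only returns canned data, and the mock exists purely so we can assert on its calls.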
&lt;h4&gt;How having a proper test suite improves your code&lt;/h4&gt;
&lt;p&gt;Writing code with testability in mind eventually leads to cleaner code, simply because keeping the tests simple forces you to write narrowly scoped code that is limited to specific things. Long methods that do several things are as hard to test as they are to maintain, so knowing that you &lt;strong&gt;have&lt;/strong&gt; to test every such piece of code practically acts as an incentive to keep the code maintainable.&lt;/p&gt;
&lt;p&gt;Finally, refactoring and cleanup tasks are a lot safer when the code is covered by reliable tests that stop your deployment when something breaks, which makes such tasks a breeze for the developer.&lt;/p&gt;
&lt;h3&gt;There&apos;s no strategy that&apos;s better than the others&lt;/h3&gt;
&lt;p&gt;I am aware that picking a testing strategy and policy can be a difficult task, but I don&apos;t think a team should stress over it. It should be a choice that&apos;s adapted to the team&apos;s modus operandi, so that they adapt to it easily and it doesn&apos;t feel like a huge shift from what they had been doing so far.&lt;/p&gt;
&lt;p&gt;If the team wants to practice &lt;a href=&quot;https://en.wikipedia.org/wiki/Behavior-driven_development&quot;&gt;BDD&lt;/a&gt; and it suits the project, that should be OK. If the team doesn&apos;t feel OK about doing &lt;a href=&quot;https://en.wikipedia.org/wiki/Test-driven_development&quot;&gt;TDD&lt;/a&gt;, where tests are written in advance, that should be fine as well, because the purpose is to save the team&apos;s time, not to enforce strict policies. I am aware that methodologies exist for a reason, but experience shows that as long as the team is not practicing &lt;a href=&quot;https://www.hanselman.com/blog/fear-driven-development-fdd&quot;&gt;FDD&lt;/a&gt;, it should be OK.&lt;/p&gt;
&lt;h3&gt;Conclusion&lt;/h3&gt;
&lt;p&gt;No matter how experienced or good we are at what we do, we will eventually make a mistake. To avoid that annoying RuntimeError that disrupts our users&apos; delightful experience, we have to make sure that our code works well no matter what, and luckily, we have the tools to do so. Embrace testing, focus on covering as many scenarios as you can and feel comfortable deploying. Even on Friday...&lt;/p&gt;
&lt;h3&gt;Further reading&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;&lt;a href=&quot;https://web.stanford.edu/class/archive/cs/cs107/cs107.1212/testing.html&quot;&gt;Software Testing Strategies&lt;/a&gt; - Written by Julie Zelenski, with modifications by Nick Troccoli&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://martinfowler.com/bliki/TestDouble.html&quot;&gt;TestDouble&lt;/a&gt; - Martin Fowler&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://martinfowler.com/articles/mocksArentStubs.html&quot;&gt;Mocks Aren&apos;t Stubs&lt;/a&gt; - Martin Fowler&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://blog.pragmatists.com/test-doubles-fakes-mocks-and-stubs-1a7491dfa3da&quot;&gt;Test Doubles — Fakes, Mocks and Stubs&lt;/a&gt; - Michal Lipski&lt;/li&gt;
&lt;/ul&gt;
</content:encoded><author>Fotis</author></item><item><title>How to fix Sectigo&apos;s expired root certificates</title><link>https://falexandrou.dev/posts/2020-05-30-sectigo-expired-root-certificates</link><guid isPermaLink="true">https://falexandrou.dev/posts/2020-05-30-sectigo-expired-root-certificates</guid><description>Fixing an issue that emerged</description><pubDate>Sat, 30 May 2020 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;As of today (May 30th 2020), Sectigo&apos;s root certificates that are usually bundled with any SSL purchase (in my case it was in February 2020, just 3 months ago) are due to expire.
Here&apos;s a short post on how to deal with this, so that you don&apos;t pull your hair out as I did.&lt;/p&gt;
&lt;h3&gt;The problem&lt;/h3&gt;
&lt;p&gt;You might not be able to identify the issue at once: the browser will display the SSL certificate just fine, as it&apos;s still valid. However, if you have any &lt;code&gt;curl&lt;/code&gt; calls or, as in my case, alerting software such as Pingdom or OpsGenie, you will be getting alerts.&lt;/p&gt;
&lt;p&gt;I initially thought it was some system issue, but it turned out it wasn&apos;t: an &lt;a href=&quot;https://www.ssllabs.com/ssltest&quot;&gt;SSL test via SSL Labs&lt;/a&gt; showed my intermediate certificates as expired.&lt;/p&gt;
&lt;h3&gt;The solution&lt;/h3&gt;
&lt;ul&gt;
&lt;li&gt;Head over to this &lt;a href=&quot;https://support.sectigo.com/articles/Knowledge/Sectigo-AddTrust-External-CA-Root-Expiring-May-30-2020&quot;&gt;support announcement from Sectigo&lt;/a&gt; and don&apos;t be lured by the &quot;You don’t have to reinstall your certificates&quot; statement. It clearly refers only to sites that are accessed through browsers.&lt;/li&gt;
&lt;li&gt;Scroll down and try to identify the modern roots (COMODO RSA/ECC Certification Authority and USERTrust RSA/ECC Certification Authority) and pick the one according to your Certification Authority.&lt;/li&gt;
&lt;li&gt;On the pages that open, search for &quot;Download&quot; and download the new roots. Weirdly enough, you&apos;ll get file names that only contain digits. Rename them to &lt;code&gt;USERTrustRSAAddTrustCA.crt&lt;/code&gt; and &lt;code&gt;AddTrustExternalCARoot.crt&lt;/code&gt; accordingly.&lt;/li&gt;
&lt;li&gt;Find (or download again) your SSL certificate package, and copy the folder under a different name (e.g. &lt;code&gt;STAR_hudabeauty_com_new_sectigo_root&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;Replace the files &lt;code&gt;AddTrustExternalCARoot.crt&lt;/code&gt; and &lt;code&gt;USERTrustRSAAddTrustCA.crt&lt;/code&gt; with the ones you just downloaded.&lt;/li&gt;
&lt;li&gt;Chain the certificate again in the required order. On a *nix system the command should look something like this:&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code&gt;cat YOUR_certificate_name.crt \
    AddTrustExternalCARoot.crt \
    SectigoRSADomainValidationSecureServerCA.crt \
    USERTrustRSAAddTrustCA.crt &amp;gt; YOUR_chained_certificate.crt
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;You now have a new chained SSL certificate in place; you can copy it over to your server or use it in your certificate manager (if you&apos;re using one).&lt;/p&gt;
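Before deploying, you can double-check the result by printing the validity window of every certificate in the chained file with openssl. A quick sketch follows; it generates a throwaway self-signed certificate to stand in for your chained file so the commands are runnable as-is, so substitute your real file name:

```shell
# Stand-in for YOUR_chained_certificate.crt: a throwaway self-signed cert.
openssl req -x509 -newkey rsa:2048 -nodes -keyout /tmp/demo.key \
    -out /tmp/demo_chained.crt -days 1 -subj "/CN=demo.example"

# Print the validity window of every certificate in the bundle.
# An expired intermediate shows up as a "Not After" date in the past.
openssl crl2pkcs7 -nocrl -certfile /tmp/demo_chained.crt | \
    openssl pkcs7 -print_certs -text -noout | grep -E "Not (Before|After)"
```

The crl2pkcs7 trick is just a convenient way to feed a multi-certificate PEM bundle through openssl so that each certificate in it gets printed.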
&lt;p&gt;Hope this saved you some time.&lt;/p&gt;
</content:encoded><author>Fotis</author></item><item><title>Using a React Component as a Layout with ReactOnRails</title><link>https://falexandrou.dev/posts/2019-03-10-react_on_rails_jsx_layout</link><guid isPermaLink="true">https://falexandrou.dev/posts/2019-03-10-react_on_rails_jsx_layout</guid><description>React and Ruby on Rails is a match made in heaven</description><pubDate>Sun, 10 Mar 2019 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;New year, new codebase for a &lt;a href=&quot;https://stackmate.io&quot;&gt;side-project of mine&lt;/a&gt;, and I decided to go with &lt;a href=&quot;https://github.com/shakacode/react_on_rails&quot;&gt;ReactOnRails&lt;/a&gt;, turbolinks and other sorcery that will help me go faster but won&apos;t sacrifice code quality.&lt;/p&gt;
&lt;p&gt;An issue I found with this approach is that it wasn&apos;t clear how to render a React component as a layout, since the &lt;code&gt;yield&lt;/code&gt; call in my &lt;code&gt;application.html.erb&lt;/code&gt; would render the &lt;code&gt;&amp;lt;%= react_component ... %&amp;gt;&lt;/code&gt; part in the view, as described in the ReactOnRails README file. Here&apos;s the way I found to do it:&lt;/p&gt;
&lt;p&gt;Added a &lt;code&gt;react_layout&lt;/code&gt; method to my controller; that way, child classes can override it.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# file app/controllers/application_controller.rb
class ApplicationController &amp;lt; ActionController::Base
  def react_layout
    &apos;LayoutDefault&apos;
  end

  def react_layout_props
    { user: { id: current_user.id } } # ... fill in your layout props
  end
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Created a &lt;code&gt;ReactHelper&lt;/code&gt; module with a &lt;code&gt;react_view&lt;/code&gt; wrapper method, which utilizes &lt;code&gt;react_component&lt;/code&gt; internally.&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# file app/helpers/react_helper.rb
module ReactHelper
  def react_view viewname, props: {}, layout: nil, layout_props: {}
    layout ||= controller.react_layout
    layout_component_props = controller.react_layout_props.merge(layout_props)

    react_component(layout, props: layout_component_props.merge({
      component: viewname,
      componentProps: props,
    }))
  end
end
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Replaced calls to &lt;code&gt;react_component&lt;/code&gt; with &lt;code&gt;react_view&lt;/code&gt;, for example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# file app/views/dashboard/index.html.erb
&amp;lt;%= react_view(&quot;DashboardView&quot;, props: @props) %&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Dynamically rendered the component requested by the view inside &lt;code&gt;LayoutDefault.jsx&lt;/code&gt;:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// store this in a location resolvable by webpacker
import React from &apos;react&apos;;
import PropTypes from &apos;prop-types&apos;;

const LayoutDefault = ({ component, componentProps }) =&amp;gt; {
  const ViewComponent = ReactOnRails.getComponent(component);

  return (
    &amp;lt;React.Fragment&amp;gt;
      ... Layout area ...
      &amp;lt;ViewComponent.component {...componentProps} /&amp;gt;
      ... Layout area ...
    &amp;lt;/React.Fragment&amp;gt;
  );
};

LayoutDefault.propTypes = {
  component: PropTypes.string.isRequired,
  componentProps: PropTypes.object,
};

LayoutDefault.defaultProps = {
  componentProps: {},
};

export default LayoutDefault;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;And done: the app now renders the React layout, with the component requested by the view rendered inside it.&lt;/p&gt;
&lt;p&gt;Pro tip: you need to declare &lt;code&gt;ReactOnRails&lt;/code&gt; as a global in your ESLint config if you&apos;re using a linter (which you should).&lt;/p&gt;
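For reference, that global declaration could look like this in an .eslintrc.js; this is a sketch to merge into whatever ESLint config you already have:

```javascript
// .eslintrc.js -- declare ReactOnRails as a read-only global,
// since the react_on_rails gem injects it at runtime and the
// linter would otherwise flag it as undefined.
module.exports = {
  globals: {
    ReactOnRails: 'readonly',
  },
};
```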
</content:encoded><author>Fotis</author></item><item><title>Understanding variable assignment in JavaScript</title><link>https://falexandrou.dev/posts/2018-03-21-understanding-assignments-in-javascript</link><guid isPermaLink="true">https://falexandrou.dev/posts/2018-03-21-understanding-assignments-in-javascript</guid><description>How the newest JavaScript versions handle variable assignments</description><pubDate>Wed, 21 Mar 2018 00:00:00 GMT</pubDate><content:encoded>&lt;h3&gt;Hello &lt;code&gt;var&lt;/code&gt;, old friend&lt;/h3&gt;
&lt;p&gt;In days of yore, using &lt;code&gt;var&lt;/code&gt; was the only way to declare a variable. Those were interesting times, when developers dealt with hoisting all the time, leading to hair-pulling bugs and unexpected behavior. To better understand what hoisting is, think of the following example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;a = 5;
var a;

console.log(a);
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This would actually print &lt;code&gt;5&lt;/code&gt; instead of &lt;code&gt;undefined&lt;/code&gt;, since the compiler &lt;em&gt;hoists&lt;/em&gt; the variable declaration to the top of the associated scope and translates the snippet above into:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;var a;
....
a = 5;
console.log(a)
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;The same principle applies to function declarations, so in the following snippet:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;foo();

function foo() {
  // magic here
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;the function declaration gets hoisted within its scope and the call succeeds. Note, however, that this isn&apos;t the case for function expressions, which means that the following example&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;foo();

var foo = function () {};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;would throw a &lt;code&gt;TypeError&lt;/code&gt; such as &lt;code&gt;Uncaught TypeError: foo is not a function&lt;/code&gt;&lt;/p&gt;
&lt;h3&gt;ES2015: &lt;code&gt;let&lt;/code&gt; and &lt;code&gt;const&lt;/code&gt; are introduced&lt;/h3&gt;
&lt;p&gt;Fast forward a few years, and the &lt;code&gt;let&lt;/code&gt; and &lt;code&gt;const&lt;/code&gt; keywords were introduced which, along with &lt;code&gt;var&lt;/code&gt;, give us more ways to bind declarations to their associated scope; unlike &lt;code&gt;var&lt;/code&gt;, though, they are scoped to the enclosing block rather than the enclosing function. By using &lt;code&gt;let&lt;/code&gt; we&apos;re able to declare bindings that can be reassigned, such as&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;let a = 5;
...
a = i + b + n;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and so on. This isn&apos;t the case with &lt;code&gt;const&lt;/code&gt; though, since it creates bindings that can only be assigned once, meaning that a block like the following&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;const a = 5;
a = 12;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;would throw a &lt;code&gt;TypeError&lt;/code&gt; such as &lt;code&gt;Uncaught TypeError: Assignment to constant variable&lt;/code&gt;, and so would operators like &lt;code&gt;a += 10&lt;/code&gt;, &lt;code&gt;a++&lt;/code&gt; or even bitwise ones like &lt;code&gt;a &amp;lt;&amp;lt;= 1&lt;/code&gt;.&lt;/p&gt;
&lt;p&gt;Now, a common misconception (one I&apos;ve held myself in the past) is that &lt;code&gt;const&lt;/code&gt; creates immutable values, which is not the case. Objects assigned to &lt;code&gt;const&lt;/code&gt; bindings are perfectly mutable, as shown in the following (working) example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;const obj = { a: 1, b: 2 };
obj.c = 3;

console.log(obj);
// Prints  {a: 1, b: 2, c: 3}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In order to actually make an object immutable, one should use the &lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/freeze&quot;&gt;&lt;code&gt;Object.freeze()&lt;/code&gt;&lt;/a&gt; method; mutating a frozen object raises an error in strict mode and fails silently otherwise. Please note, though, that this only performs a shallow freeze, which means we&apos;re still able to mutate the properties of the nested &lt;code&gt;obj.o&lt;/code&gt; object, as in the following (working) example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;const obj = Object.freeze({
  a: 1,
  o: { name: &quot;G.I. Joe&quot;, cobra: false },
});

obj.o.cobra = true;

console.log(obj);
// Prints { a: 1, o: { name: &quot;G.I. Joe&quot;, cobra: true } }
&lt;/code&gt;&lt;/pre&gt;
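If a deeply immutable object is what we're after, we can recursively freeze the nested objects ourselves. A minimal sketch follows; it assumes plain, non-cyclic objects and arrays:

```javascript
// Recursively freeze an object and everything nested inside it.
// Assumes plain objects/arrays with no cycles.
function deepFreeze(obj) {
  for (const key of Object.keys(obj)) {
    const value = obj[key];
    if (typeof value === 'object') {
      if (value !== null) deepFreeze(value);
    }
  }
  return Object.freeze(obj);
}

const hero = deepFreeze({ name: 'G.I. Joe', gear: { rifle: true } });
try {
  hero.gear.rifle = false; // throws in strict mode, ignored otherwise
} catch (e) {}
console.log(hero.gear.rifle); // still true
```

The nested objects are frozen first, then the outer object, so the mutation that succeeded in the shallow-freeze example above no longer goes through.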
&lt;p&gt;To make matters a bit more complex, JavaScript also provides the &lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/seal&quot;&gt;&lt;code&gt;Object.seal()&lt;/code&gt;&lt;/a&gt; method, which prevents new properties from being added to an object and marks all existing properties as non-configurable, as well as &lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/preventExtensions&quot;&gt;&lt;code&gt;Object.preventExtensions()&lt;/code&gt;&lt;/a&gt;, which only prevents new properties from ever being added to an object.&lt;/p&gt;
&lt;p&gt;If we&apos;d like to compare the three, we&apos;d summarize as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/freeze&quot;&gt;&lt;code&gt;Object.freeze()&lt;/code&gt;&lt;/a&gt; makes an object&apos;s properties immutable, while nested objects&apos; properties are still mutable. Example in non-strict mode:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;const frozen = Object.freeze({ a: 1, b: 2 });
frozen.a = 4;
delete frozen.b; // returns false
console.log(frozen);
// Prints the object in its original shape  { a: 1, b: 2 }
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/seal&quot;&gt;&lt;code&gt;Object.seal()&lt;/code&gt;&lt;/a&gt; allows existing properties to be changed, but prevents properties from being added or deleted. Example in non-strict mode:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;const sealed = Object.seal({ a: 1, b: 2 });
sealed.a = 5; // property &quot;a&quot; now has a new value
sealed.c = 12;
delete sealed.b; // returns false
console.log(sealed);
// Prints the object without the new properties added but with property &quot;a&quot; mutated  {a: 5, b: 2}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;&lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object/preventExtensions&quot;&gt;&lt;code&gt;Object.preventExtensions()&lt;/code&gt;&lt;/a&gt; allows properties to be changed and deleted, but prevents new properties from being added. Example in non-strict mode:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;const prev = Object.preventExtensions({ a: 1, b: 2 });
prev.a = 5;
delete prev.b; // returns true
console.log(prev);
// Prints the object with a new value for property &quot;a&quot;, without the deleted property &quot;b&quot;  {a: 5}
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;h5&gt;Credits&lt;/h5&gt;
&lt;ul&gt;
&lt;li&gt;&quot;ES2015 const is not about immutability&quot; by &lt;a href=&quot;https://mathiasbynens.be/notes/es6-const&quot;&gt;Mathias Bynens&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&quot;Hoisting&quot; on &lt;a href=&quot;https://github.com/getify/You-Dont-Know-JS/blob/master/scope%20%26%20closures/ch4.md#chapter-4-hoisting&quot;&gt;You Dont Know JS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&quot;Block Scoping Revisited&quot; on &lt;a href=&quot;https://github.com/getify/You-Dont-Know-JS/blob/master/scope%20%26%20closures/ch5.md#block-scoping-revisited&quot;&gt;You Dont Know JS&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Object methods on &lt;a href=&quot;https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Object&quot;&gt;MDN&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</content:encoded><author>Fotis</author></item><item><title>Being efficient as a remote worker: a study on productivity</title><link>https://falexandrou.dev/posts/2018-03-12-being-efficient-as-a-remote-worker</link><guid isPermaLink="true">https://falexandrou.dev/posts/2018-03-12-being-efficient-as-a-remote-worker</guid><description>Working remotely is no different than any office job — or at least it shouldn&apos;t be</description><pubDate>Mon, 12 Mar 2018 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Whether in contractor or full-time employee roles, remote work is a popular trend globally, as many companies prefer this type of distributed setup in order to take geographic restrictions and other limitations out of the equation on their path to success. Working remotely is no different than any office job — or at least it shouldn&apos;t be — so, being a remote worker myself, I thought it would be handy to document the processes I use to work &amp;amp; collaborate as effectively as I would at an on-site job.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/images/rw1.jpg&quot; alt=&quot;Being efficient as a remote worker: a study on productivity&quot; class=&quot;image&quot;&gt;&lt;/p&gt;
&lt;h3&gt;The workplace and dress code&lt;/h3&gt;
&lt;p&gt;First and most important is having an isolated space. Personally, I was lucky enough to own a tiny space (I think the word studio is appropriate) within walking distance from my house, which I decided to renovate and convert into an office. This gives me the benefit of having to walk to work, which means I have to wear clothes and shoes and actually go to work. Let me quickly refer back to the word &quot;isolated&quot; now, which for me is a key ingredient of efficient remote work. There&apos;s no absolute need to rent or buy a space outside your house if it&apos;s spacious enough; a spare room or a quiet corner that you can isolate yourself in and name &quot;the office&quot; may work just as well.&lt;/p&gt;
&lt;p&gt;Attitude is a big part of effective remote work and I&apos;ve found that having a dress code is important; One wouldn&apos;t walk into an office in their jammies or a t-shirt with a ketchup stain on, so why should they do it when working remotely? After all, video calls are part of the game, so being presentable is a must.&lt;/p&gt;
&lt;p&gt;Also, forcing yourself to walk for a few minutes (go to the cafe and grab a cup of coffee for example), can prove to be quite helpful as it stimulates the brain, creates the illusion of commuting and prevents you from staying in for days if the office is located inside the house.&lt;/p&gt;
&lt;p&gt;Last but not least, clearly stating to yourself and others that you&apos;re &quot;at work&quot; sets the tone for how you and your circles approach remote work.&lt;/p&gt;
&lt;h3&gt;It&apos;s all about routine&lt;/h3&gt;
&lt;p&gt;The daily commute is part of the routine for office workers, and so are the morning coffee preparation and the chit-chat with co-workers. Remote work eliminates the word &quot;commute&quot; from the last sentence, but it shouldn&apos;t alter the essence of it, meaning there should still be a routine in place. The main reason for having a routine is that our brain can afford a limited number of decisions per day, so having a few decisions taken care of, along with a steady sleep pattern, reserves energy and cognitive strength for the hard part of every day: the actual work.&lt;/p&gt;
&lt;p&gt;Having a strict timeframe in which you walk in and out of the office is really helpful and can prevent one of the major downsides of working remotely: working all day (and / or night). Yes, contrary to popular belief, remote workers actually work very hard and are very engaged in their work, so long hours are a common, harmful pattern. They are harmful because you need energy and time both for working the next day and for the parts of the day that don&apos;t involve work, such as friends and family; having a clear time frame in which you accomplish everything has a cumulative positive effect on your productivity.&lt;/p&gt;
&lt;h3&gt;Planning everything&lt;/h3&gt;
&lt;p&gt;While trying to avoid the long discussion about estimations — let&apos;s assume the estimations are close to perfect for now — let me just say that missing a deadline is a terrible thing, but it can happen to anyone. A side effect is that while the deadline approaches, one may find themselves working long hours and derailing their day-to-day schedule. Planning right, and planning daily, is the one technique I&apos;ve found to be most beneficial.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/images/rw3.jpg&quot; alt=&quot;Planning everything&quot; class=&quot;image&quot;&gt;&lt;/p&gt;
&lt;p&gt;For example, I always have a notebook and a sharpened pencil right next to me. This helps me quickly draw a diagram or write down the basic outline of a class, but I also reserve one page every day for my daily schedule. I start off every week by outlining the high-level tasks I&apos;m gonna work on and assigning days to them, then move to scheduling my entire day, and I mean everything: I break down every task into subtasks, estimate how long each is going to take, write them down with the actual time of day I&apos;m gonna work on them, and apply the same pattern to breaks, conference calls, lunch and so on. This works like a charm for two reasons. First, breaking down a task into tiny pieces makes it no longer intimidating (even for the extremely experienced, a big task is more intimidating than they might think); instead of over-thinking it, the steps to achieve the task are now in front of me. Second, it&apos;s an exercise for my estimation skills, which means I get daily practice answering the question &quot;how long is this going to take?&quot;&lt;/p&gt;
&lt;h3&gt;Pomodoro technique and Deep work&lt;/h3&gt;
&lt;p&gt;The Pomodoro technique is a well-known practice among software engineers: you set a timer for 25 minutes, completely focus on a task for those 25 minutes, then take a 5-minute break and repeat for all the tasks in your list. This sounds very interesting, but I found out that it doesn&apos;t actually work for me, as there usually are tasks that require my focus for more than 25 minutes, so I resorted to Cal Newport&apos;s approach in &lt;a href=&quot;/2018/01/14/deep-work-book-productivity/&quot;&gt;Deep Work&lt;/a&gt;. In his book, Newport states that large portions of deep, focused and uninterrupted work are key to producing great output, but he also mentions (among other Deep Work patterns) that some people achieved optimum performance when 90-minute intervals of deep, meaningful work (eg. engineering a demanding feature on a website) were followed by 90 minutes of shallow work (eg. a video call or going through your emails). Following that example, I organize my tasks in a way that follows that pattern. I try to be somewhat loose about it and not restrain myself to the 90-minute limit, so I group my tasks into 70-100 minute windows and follow up with just 30-45 minutes of shallow work when necessary, or a short break.&lt;/p&gt;
&lt;h3&gt;Avoiding distractions &amp;amp; Prioritizing communication&lt;/h3&gt;
&lt;p&gt;There are two ways a distraction can work: you&apos;re either the distracted or the distracting person, and it has the same damaging effect; the other party has to shift their focus from their current task to the distraction and then, once they&apos;re done with it, shift focus again to what they were previously doing. This is terrible because most of the time, when you are in a state of &quot;flow&quot; or deep work (the state in which you&apos;re so focused that you perform the work with the same ease an established musician plays their hit song), it takes some time to get back to it. The solution is having a policy about distractions and prioritizing communication, while clearly communicating this policy to your colleagues and inner circle.&lt;/p&gt;
&lt;p&gt;Having a policy about distractions means that you actually take some time to customize the notifications sent to your smartphone (the source of all distraction), unsubscribe from newsletters you don&apos;t actually read, ruthlessly report as spam every email you haven&apos;t signed up for, and quit social media. I wouldn&apos;t advise going too far and announcing &quot;Hey everyone! I&apos;m quitting social media! That&apos;s it. We had a good run but now I&apos;m off, deleted everything&quot;; just quit (and turn the notifications off). The benefits are remarkable: you&apos;re not missing out on anything — unless you think it&apos;s a matter of life and death that you read your friend&apos;s rant about today&apos;s headlines — and you&apos;ll find yourself having a lot more time to be really social (as in meeting someone in person).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/images/rw4.jpg&quot; alt=&quot;Avoid distractions &amp;amp; Prioritize communication&quot; class=&quot;image&quot;&gt;&lt;/p&gt;
&lt;p&gt;Prioritizing communication is another key aspect. My policy here is that I never call anyone impromptu or broadcast messages that might trigger a notification in our messaging tool, and I have customized notification policies for every piece of software that triggers one. Then I apply the following pattern: most of the time, the issue I&apos;d like to discuss is not urgent, nor does it require the immediate attention of the other people involved, so an email or a public message on the communication channel will do. If the issue is kind of urgent but can wait for a couple of hours, I&apos;d send a direct message, while if the issue is critical (meaning that it&apos;s a matter of life and death for the business I&apos;m affiliated with — for example, a service is down), I&apos;ll try to get face time. This strategy should be communicated to your colleagues, supervisor and inner circle, as not all people are familiar with the effort required to stay focused at all costs — for example, you might be the only person in the company working remotely.&lt;/p&gt;
&lt;p&gt;Avoiding distractions also means that you&apos;re not interrupting what you do in order to reply to emails the moment they arrive, daydream about a weekend on the beach, prepare coffee, grab a soda, have a friend over and so on. The office is isolated for a reason and it should remain as such; remember the attitude: you&apos;re at work.&lt;/p&gt;
&lt;h3&gt;Getting stuck&lt;/h3&gt;
&lt;p&gt;The hardest part of working remotely is getting stuck; it&apos;s very easy to fall into despair and start spreading distractions like a disease, so the ultimate cure is prevention. First, admit that getting stuck is part of the job description, as you&apos;re trying to solve hard problems all by yourself, isolated, staring at your screen(s). Before you take on a difficult task, try to identify issues that may cause you to google repeatedly for a considerable amount of time, and pick your colleagues&apos; brains about them. If you&apos;re already in the middle of the task and can&apos;t distract anyone, try talking to yourself — it&apos;s not as embarrassing as it sounds — a technique called &lt;a href=&quot;https://en.wikipedia.org/wiki/Rubber_duck_debugging&quot;&gt;rubber duck debugging&lt;/a&gt;, as if you were explaining the issue to a complete stranger. In case that doesn&apos;t work, distance yourself from the problem and try to see it as an external party would; if you have the luxury of time, engage in some physical activity like a short walk. Walking helps you think because it stimulates your brain, and the less busy the walking path is, the better (ie. prefer a walk in a park over a busy street where your brain is actively trying to avoid obstacles). Finally, do a bit of self-exploration, because the danger of getting stuck due to a personal issue might be lurking there (for example, being tired, or in a bad mood because of an argument with a loved one). Notable mention: don&apos;t panic, there&apos;s always another approach you can take.&lt;/p&gt;
&lt;h3&gt;Keeping a worklog&lt;/h3&gt;
&lt;p&gt;Sharing updates with your supervisor or clients is important while working remotely. It&apos;s an active way to stay in touch and eliminates the need — one that you or your supervisors might feel — to prove that you&apos;re working by instantly replying to messages or emails. It doesn&apos;t have to be super detailed; just an outline of your current status and your plan for the next day. This could be a scrum call or a regular email that gets sent out at the end of the day.&lt;/p&gt;
&lt;h3&gt;Shutdown ritual&lt;/h3&gt;
&lt;p&gt;Being disconnected is key to being productive, so at the end of the day, in my case, I just make sure everything that was planned for the day is done, send out any last emails that are important for the following days, plan the next day and disconnect. I walk home and then do only what&apos;s not work related: play board games with my daughter, have dinner with my family, read books, watch movies, study, practice and sleep. I avoid checking emails and my phone (with the exception of monitoring tools that might occasionally notify me if there&apos;s a critical issue in some service that requires my attention — i.e. a website is struggling or down).&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/images/rw5.jpg&quot; alt=&quot;Don&apos;t miss out on fun and games&quot; class=&quot;image&quot; /&gt;&lt;/p&gt;
&lt;h3&gt;Socializing, coffee breaks&lt;/h3&gt;
&lt;p&gt;Working remotely means that you don&apos;t spend time in restaurants or bars with colleagues after work, you don&apos;t have lunch in the company cafeteria, and you miss the morning cup of java (no pun intended) while exchanging jokes with a teammate. This is an issue that has to be addressed, since being social is essential to being human. Some suggestions: have a hobby that requires you to go somewhere. For example, I took guitar lessons every Friday (yes, I was 32 at the time) and it was great, because it was a nice chance to disconnect, focus on something completely unrelated to what I do every day and interact with other humans (apart from when I studied guitar at home, where nobody wanted to be nearby — apparently I&apos;m not good at guitar, but that&apos;s a different subject). Meeting friends for a cup of coffee, movies or drinks is undoubtedly required as well; it helps eliminate the danger of being consumed by what you do, sets a marker on the daily schedule and reminds us what being human is all about: interacting with other humans.&lt;/p&gt;
&lt;h3&gt;Bonus point: reading lunch breaks&lt;/h3&gt;
&lt;p&gt;A 45-minute lunch break is vital for the routine, as food provides energy and stimulates the parts of the brain that are responsible for feeling happy. Being away from screens while having lunch is important, so I&apos;ve found that reading a few pages that aren&apos;t work related (for example science fiction, literature, essays) improves my focus throughout the day, and if I can fit a 10-minute walk in this time interval, it works like magic. The only issue is that this only works when the schedule is not very demanding; otherwise it can act as a distraction.&lt;/p&gt;
&lt;h5&gt;Credits&lt;/h5&gt;
&lt;ul&gt;
&lt;li&gt;Images were downloaded from &lt;a href=&quot;https://www.unsplash.com&quot;&gt;Unsplash&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&quot;Why Walking Helps Us Think&quot;, an article on &lt;a href=&quot;https://www.newyorker.com/tech/elements/walking-helps-us-think&quot;&gt;The New Yorker&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Stanford&apos;s take on how walking improves creativity on &lt;a href=&quot;https://news.stanford.edu/2014/04/24/walking-vs-sitting-042414/&quot;&gt;their blog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;Buffer&apos;s State of Remote Work 2018 on &lt;a href=&quot;https://open.buffer.com/state-remote-work-2018/&quot;&gt;their blog&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;My review on Cal Newport&apos;s book &quot;Deep Work&quot; on &lt;a href=&quot;/2018/01/14/deep-work-book-productivity&quot;&gt;this blog&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;
</content:encoded><author>Fotis</author></item><item><title>How to downsize a root EBS volume on AWS</title><link>https://falexandrou.dev/posts/2018-02-14-downsizing-root-ebs-volume</link><guid isPermaLink="true">https://falexandrou.dev/posts/2018-02-14-downsizing-root-ebs-volume</guid><description>How to decrease the size of a root EBS volume on an AWS instance</description><pubDate>Wed, 14 Feb 2018 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;When it comes to increasing the size of an EBS volume, AWS provides clear options and &lt;a href=&quot;https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ebs-modify-volume.html&quot;&gt;documentation&lt;/a&gt; for that. But what happens when you were optimistic about the usage of your root EBS volume or you&apos;ve now moved most of your data to S3, Glacier or EFS? AWS doesn&apos;t give you the option to downsize your root EBS volume but luckily there&apos;s a workaround; we can just replicate our (large) volume into a smaller one, and use that as a replacement. This post is a thorough break down of all the necessary steps you need to take in order to do that:&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
&lt;p&gt;Before you even think about starting, &lt;strong&gt;get a snapshot of the volume&lt;/strong&gt; you want to downsize.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Make sure you read the first step and &lt;strong&gt;get a snapshot of the volume&lt;/strong&gt; you want to downsize (I think it&apos;s clear now that you need to backup everything and get a snapshot :P).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Stop the instance that the volume is attached to&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Detach the volume from the instance. For the sake of this how-to, we&apos;re gonna call this the &quot;old volume&quot;&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Create an EBS volume of the desired size, and we&apos;re gonna refer to this as the &quot;new volume&quot;. In the image, you can see I have the 500GB volume I need to downsize to 80GB.&lt;/p&gt;
&lt;div class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/ebs-1.jpg&quot; alt=&quot;EBS Volume downsize - Create the new volume&quot; /&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Launch a new EC2 instance. A t2.micro or even a t2.nano would do; make sure you have the right SSH keypair so you can connect to the instance, which we&apos;ll call the &quot;new instance&quot; from now on.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Connect to the new instance via SSH.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;By using the AWS Console, attach the volumes to the new instance as follows:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;The old volume as &lt;code&gt;/dev/sdf&lt;/code&gt; which in the new instance will become &lt;code&gt;/dev/xvdf&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;The new volume as &lt;code&gt;/dev/sdg&lt;/code&gt; which in the new instance will become &lt;code&gt;/dev/xvdg&lt;/code&gt;&lt;/li&gt;
&lt;/ul&gt;
&lt;div class=&quot;image fit&quot;&gt;&lt;img src=&quot;/images/ebs-2.jpg&quot; alt=&quot;EBS Volume downsize - Attach the volumes to the new instance&quot; /&gt;&lt;/div&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Make sure the file system you&apos;re trying to resize is in order by running &lt;code&gt;sudo e2fsck -f /dev/xvdf1&lt;/code&gt;. If you&apos;re resizing a different partition on the drive, change the number 1 to the partition number you wish to resize.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If any errors came up via the &lt;code&gt;e2fsck&lt;/code&gt; command, head over to &lt;a href=&quot;https://linux.101hacks.com/unix/e2fsck/&quot;&gt;this page&lt;/a&gt; to get the right command for the fix. Don&apos;t panic :)&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;We now need to shrink the filesystem to the smallest size possible. This step is key, because we&apos;re gonna use &lt;code&gt;dd&lt;/code&gt; to copy the contents of the old volume bit by bit. Run:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sudo resize2fs -M -p /dev/xvdf1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;(or change the &quot;1&quot; to your corresponding partition) and make a note of the last line it will print. It will take some time, but the last line would eventually look something like:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;The filesystem on /dev/xvdf1 is now 28382183 (4k) blocks long.
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Convert the number of 4k blocks that the &lt;code&gt;resize2fs&lt;/code&gt; command printed into MB and round it up a bit; for example &lt;strong&gt;28382183 * 4 / 1024 ~= 110867&lt;/strong&gt;, so round it up to &lt;strong&gt;115000&lt;/strong&gt;.&lt;/p&gt;
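&lt;p&gt;The conversion can be sketched with a bit of shell arithmetic (the block count below is the example value from the &lt;code&gt;resize2fs&lt;/code&gt; output above; substitute the number from your own run):&lt;/p&gt;

```shell
# Convert the 4k block count reported by resize2fs into MiB for the dd copy.
# BLOCKS is the example value from above -- use the number from your own run.
BLOCKS=28382183
MB=$(( BLOCKS * 4 / 1024 ))   # 4 KiB blocks to MiB, integer division
echo "$MB"                    # prints 110867
```

&lt;p&gt;Round the result up generously (here, to 115000) so the copy includes a safety margin.&lt;/p&gt;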
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Copy the entire old volume device (not just the partition) to the new volume device, bit by bit so that we&apos;re certain we have both the partition table &amp;amp; data in the boot partition:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;sudo dd if=/dev/xvdf of=/dev/xvdg bs=1M count=115000
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This is going to take a few minutes, you might as well make some coffee &amp;amp; read a book in the meantime.&lt;/p&gt;
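&lt;p&gt;If you&apos;d like to sanity-check the &lt;code&gt;dd&lt;/code&gt; invocation before pointing it at real devices, you can rehearse it on throwaway files; the paths below are stand-ins for &lt;code&gt;/dev/xvdf&lt;/code&gt; and &lt;code&gt;/dev/xvdg&lt;/code&gt;, not real volumes:&lt;/p&gt;

```shell
# Rehearsal on small files: create a fake "old volume", copy it the same way,
# then verify the copy is bit-for-bit identical. Safe to run anywhere.
dd if=/dev/urandom of=/tmp/old-vol.img bs=1M count=8 2>/dev/null
dd if=/tmp/old-vol.img of=/tmp/new-vol.img bs=1M count=8 2>/dev/null
if cmp -s /tmp/old-vol.img /tmp/new-vol.img; then echo "copies match"; fi
```

&lt;p&gt;On the real run, GNU dd (coreutils 8.24 and newer) also accepts &lt;code&gt;status=progress&lt;/code&gt; if you&apos;d rather watch the copy than make coffee.&lt;/p&gt;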
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;You might think (as I did) that we&apos;re done, but we&apos;re not. You need to follow the next steps in order for your volume to be bootable; otherwise you&apos;d waste time trying to boot from this volume (as I did). The important thing here is to use &lt;code&gt;gdisk&lt;/code&gt; in order to create the partition table:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Fire up &lt;code&gt;gdisk&lt;/code&gt; in order to start fixing the GPT on the new volume&lt;pre&gt;&lt;code&gt;sudo gdisk /dev/xvdg
&lt;/code&gt;&lt;/pre&gt;
You&apos;ll get a greeting message and you&apos;ll be navigating to your menus by entering letters from now on. You can hit &lt;code&gt;?&lt;/code&gt; if you require help at any point.&lt;/li&gt;
&lt;li&gt;Hit &lt;code&gt;x&lt;/code&gt; to go to extra expert options&lt;/li&gt;
&lt;li&gt;Hit &lt;code&gt;e&lt;/code&gt; to relocate backup data structures to the end of the disk, then hit &lt;code&gt;m&lt;/code&gt; to go back to the main menu&lt;/li&gt;
&lt;li&gt;Hit &lt;code&gt;i&lt;/code&gt; to get the information of a partition, then &lt;code&gt;1&lt;/code&gt; (the number one) to get the information for the first partition on the device&lt;/li&gt;
&lt;li&gt;It would look something like that:&lt;pre&gt;&lt;code&gt;Partition GUID code: 0FC63DAF-8483-4772-8E79-3D69D8477DE4 (Linux filesystem)
Partition unique GUID: DBA66894-D218-4D7E-A33E-A9EC9BF045DB
First sector: 4096 (at 2.0 MiB)
Last sector: 1677718200 (at 80.0 GiB)
Partition size: 1677308700 sectors (80.0 GiB)
Attribute flags: 0000000000000000
Partition name: &apos;Linux&apos;
&lt;/code&gt;&lt;/pre&gt;
Copy the GUID under the &lt;code&gt;Partition unique GUID&lt;/code&gt; label, (eg. &lt;code&gt;DBA66894-D218-4D7E-A33E-A9EC9BF045DB&lt;/code&gt;) and the &lt;code&gt;Partition Name&lt;/code&gt; (eg. &lt;code&gt;Linux&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Hit &lt;code&gt;d&lt;/code&gt; and then &lt;code&gt;1&lt;/code&gt; (the number one) to delete the partition, followed by &lt;code&gt;n&lt;/code&gt; and &lt;code&gt;1&lt;/code&gt; in order to create a new partition on the device&lt;/li&gt;
&lt;li&gt;You&apos;ll be asked what your first sector should be: enter &lt;code&gt;4096&lt;/code&gt;, accept the defaults for the rest (let it allocate the remainder of the disk), then enter &lt;code&gt;8300&lt;/code&gt; as the type (Linux filesystem)&lt;/li&gt;
&lt;li&gt;Change the partition&apos;s name to match the information you&apos;ve printed before by hitting &lt;code&gt;c&lt;/code&gt;, then &lt;code&gt;1&lt;/code&gt; (for the first partition) and then add the name that the partition previously had (in our example &lt;code&gt;Linux&lt;/code&gt;)&lt;/li&gt;
&lt;li&gt;Next, change the partition&apos;s GUID by hitting &lt;code&gt;x&lt;/code&gt; (to go to the expert menu), &lt;code&gt;c&lt;/code&gt;, then &lt;code&gt;1&lt;/code&gt; (for the first partition), then add the &lt;code&gt;Partition unique GUID&lt;/code&gt; that the partition previously had (in our example &lt;code&gt;DBA66894-D218-4D7E-A33E-A9EC9BF045DB&lt;/code&gt;).&lt;/li&gt;
&lt;li&gt;We&apos;re almost done: go back to main menu by hitting &lt;code&gt;m&lt;/code&gt;, then &lt;code&gt;i&lt;/code&gt; and then &lt;code&gt;1&lt;/code&gt;. You should get something like what was printed before except now the &lt;code&gt;Partition size&lt;/code&gt; should differ. If the Partition unique GUID or the Partition name are different, hit &lt;code&gt;q&lt;/code&gt; and start over.&lt;/li&gt;
&lt;li&gt;If everything&apos;s set, hit &lt;code&gt;w&lt;/code&gt; in order to write the partition table to the disk, &lt;code&gt;y&lt;/code&gt; for confirmation and you&apos;re (finally) done!&lt;/li&gt;
&lt;li&gt;Now expand your file system, because we&apos;ve shrunk it in step 11, by running:&lt;pre&gt;&lt;code&gt;sudo resize2fs -p /dev/xvdg1
&lt;/code&gt;&lt;/pre&gt;
&lt;/li&gt;
&lt;/ul&gt;
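&lt;p&gt;For reference, the interactive session above can also be scripted with &lt;code&gt;sgdisk&lt;/code&gt;, gdisk&apos;s non-interactive sibling, if it&apos;s available on your instance. This is only a sketch using the example&apos;s GUID and partition name; double-check every value against your own partition before writing anything to disk:&lt;/p&gt;

```shell
# Non-interactive sketch of the same gdisk fix (values are the example's own;
# substitute the GUID and name printed for your partition).
sudo sgdisk -e /dev/xvdg                           # relocate backup GPT structures to the end of the disk
sudo sgdisk -d 1 -n 1:4096:0 -t 1:8300 /dev/xvdg   # recreate partition 1 (end sector 0 = rest of disk)
sudo sgdisk -c 1:Linux /dev/xvdg                   # restore the partition name
sudo sgdisk -u 1:DBA66894-D218-4D7E-A33E-A9EC9BF045DB /dev/xvdg  # restore the unique GUID
sudo sgdisk -i 1 /dev/xvdg                         # print the partition info to verify
```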
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Done! Detach both volumes, create a snapshot of the new volume and attach it to the old instance as &lt;code&gt;/dev/xvda&lt;/code&gt; (root volume).&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;Keep the old volume around for some time, and only delete it after you&apos;re 100% certain that everything is in place. The new instance can be safely terminated though, once your old instance boots up with the new volume.&lt;/p&gt;
&lt;/li&gt;
&lt;/ol&gt;
&lt;h4&gt;Bonus (panic) points&lt;/h4&gt;
&lt;blockquote&gt;
&lt;p&gt;If you are dealing with a PVM image and encounter the following &lt;a href=&quot;https://forums.aws.amazon.com/thread.jspa?threadID=101969&quot;&gt;mount error&lt;/a&gt; in the instance logs&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Kernel panic - not syncing: VFS: Unable to mount root
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;when your instance doesn&apos;t pass its startup checks, you will probably need to perform one additional step.
The solution to this error is to choose the proper Kernel ID for your PVM image during image creation from your snapshot. The full list of Kernel IDs (AKIs) can be obtained &lt;a href=&quot;https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/UserProvidedKernels.html#HVM_instances&quot;&gt;here&lt;/a&gt;.
Make sure you pick the proper AKI for your image; they are restricted by region and architecture!&lt;/p&gt;
&lt;/blockquote&gt;
&lt;p&gt;(from &lt;a href=&quot;https://stackoverflow.com/questions/31245637/why-does-ec2-instance-not-start-correctly-after-resizing-root-ebs-volume&quot;&gt;StackOverflow&lt;/a&gt;)&lt;/p&gt;
&lt;h4&gt;Credits&lt;/h4&gt;
&lt;ol&gt;
&lt;li&gt;&lt;a href=&quot;https://matt.berther.io/2015/02/03/how-to-resize-aws-ec2-ebs-volumes/&quot;&gt;https://matt.berther.io/2015/02/03/how-to-resize-aws-ec2-ebs-volumes/&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://medium.com/@andtrott/how-to-downsize-a-root-ebs-volume-on-aws-ec2-amazon-linux-727c00148f61&quot;&gt;https://medium.com/@andtrott/how-to-downsize-a-root-ebs-volume-on-aws-ec2-amazon-linux-727c00148f61&lt;/a&gt;&lt;/li&gt;
&lt;li&gt;&lt;a href=&quot;https://stackoverflow.com/questions/31245637/why-does-ec2-instance-not-start-correctly-after-resizing-root-ebs-volume&quot;&gt;https://stackoverflow.com/questions/31245637/why-does-ec2-instance-not-start-correctly-after-resizing-root-ebs-volume&lt;/a&gt;&lt;/li&gt;
&lt;/ol&gt;
</content:encoded><author>Fotis</author></item><item><title>How to (selectively) run multiple processes with Foreman</title><link>https://falexandrou.dev/posts/2018-02-02-foreman-multiple-processes</link><guid isPermaLink="true">https://falexandrou.dev/posts/2018-02-02-foreman-multiple-processes</guid><description>Quick tip on using Foreman to run your processes</description><pubDate>Fri, 02 Feb 2018 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;If you&apos;re using the &lt;a href=&quot;http://ddollar.github.io/foreman/&quot;&gt;foreman gem&lt;/a&gt; to manage simple Procfile process setups, there will be a time where you&apos;ll just need to run 2 or 3 processes separately.
This can easily be achieved by passing the &quot;formation&quot; parameter or the (undocumented) concurrency parameter to &lt;code&gt;foreman start&lt;/code&gt;. So, let&apos;s say your Procfile is formed as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;api: bundle exec rails server -e development
listener: bundle exec rake listener:run
workers: bundle exec rake workers:run
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and you only need to run listener &amp;amp; workers, you&apos;ll have to start foreman as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;foreman start -c listener=1,workers=1
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;where the number stands for the number of processes you want to start for each service. So if, for example, we wish to start 5 worker processes, we can simply change this to:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;foreman start -c listener=1,workers=5
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Additionally, we can use an environment file:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;foreman start -e env.sh -c listener=1,workers=5
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;For more options please visit the &lt;a href=&quot;http://ddollar.github.io/foreman&quot;&gt;foreman documentation page&lt;/a&gt;&lt;/p&gt;
</content:encoded><author>Fotis</author></item><item><title>Deep Work by Cal Newport: The most impactful productivity read so far</title><link>https://falexandrou.dev/posts/2018-01-14-deep-work-book-productivity</link><guid isPermaLink="true">https://falexandrou.dev/posts/2018-01-14-deep-work-book-productivity</guid><description>My review on Cal Newport&apos;s &quot;Deep Work&quot;</description><pubDate>Sun, 14 Jan 2018 00:00:00 GMT</pubDate><content:encoded>&lt;h3&gt;A bit of a background&lt;/h3&gt;
&lt;p&gt;For over a decade now, along with my technical skills, I&apos;ve been trying to make sure that an 8-hour work day is as productive and efficient as it gets. I&apos;ve kept pretty strict rituals, worked in an office that&apos;s located close to, but not within, my home, kept todo lists and planned ahead, all revolving around the same principle: being able to produce impactful work within a restricted time frame, without having to work long hours. In fact, when I had to work long hours, it meant that I wasn&apos;t actually as productive as I should have been (or that some really urgent matter required my attention — i.e. some critical bug or system failure). All of this has worked just fine throughout the years and I&apos;m pretty happy with my productivity; in fact, I&apos;ve become a strong believer that being able to maintain focus and be highly productive is as important as being great at what you do.&lt;/p&gt;
&lt;p&gt;In order to validate my processes, I&apos;ve been reading a lot about how other people structure their work day, both on-site and remote, and once I switched to being fully remote (back in late 2015), I wanted to take my productivity to a whole new level: Deep Work.&lt;/p&gt;
&lt;p&gt;&lt;img src=&quot;/images/dw.jpg&quot; class=&quot;image&quot; alt=&quot;A good book and some coffee&quot; /&gt;&lt;/p&gt;
&lt;h4&gt;Deep Work&lt;/h4&gt;
&lt;p&gt;Deep Work is a state of mind in which one produces work intensely and at an elite level, completely keeping away from any distraction.
In his book, Newport starts by explaining why Deep Work is valuable and how it helps shape today&apos;s economy. He debunks quite a few myths about today&apos;s work, such as the open office space or the need to be extremely responsive to communications. He then continues with the rules that apply in order to work deeply and the different approaches to it, and since Newport is an academic himself, everything is backed by scientific evidence.&lt;/p&gt;
&lt;p&gt;His main hypothesis is that even the most innocent distractions can prove harmful; in fact, there is a chapter on how our brains&apos; neurons actually rewire themselves to become even more receptive to distractions once we start giving in to their shiny world, while on the other hand, being able to maintain focus can produce rare and valuable results.&lt;/p&gt;
&lt;p&gt;The book is split into two main parts; the first introduces the reader to the world of Deep Work and its benefits. As expected, within the first few pages one gets convinced of the merits of this type of cognitive process and is seamlessly drawn into the second part, which provides a few actionable steps. The highlights for me were his thoughts on &quot;Embrace Boredom&quot;, where he explains that trying to avoid boredom is what actually makes our brains more prone to distractions and how being disconnected from work actually works in work&apos;s favor, and &quot;Quit Social Media&quot; (you didn&apos;t see that coming, did you?), but the one that puts all the pieces together is, as expected, the last chapter, where Newport explains how to &quot;Drain the Shallows&quot; and enter states of Deep Work throughout the work day, simply by using pen and paper.&lt;/p&gt;
&lt;h4&gt;Conclusion&lt;/h4&gt;
&lt;p&gt;Deep Work isn&apos;t a self-help book with a cheesy cover, nor is it a click-bait post on a popular website. It is the product of years-long scientific research on how to achieve more in a distracted economy, broken down into simple terms and actionable steps. Every knowledge worker should read the book cover to cover, just to get an idea of what they can achieve if they&apos;re willing to adapt to a few very simple (and very sane) rules. It&apos;s a book that I thoroughly enjoyed and it has definitely broadened my horizons when it comes to producing meaningful work.&lt;/p&gt;
&lt;h4&gt;Bonus points&lt;/h4&gt;
&lt;p&gt;There are quite a few videos of Newport either in TED conferences or the Google Campus. &lt;a href=&quot;https://www.youtube.com/watch?v=qwOdU02SE0w&quot;&gt;Here&lt;/a&gt;&apos;s one that I&apos;ve liked, which is structured around the subject of his other books&lt;/p&gt;
&lt;p&gt;You can get the book from &lt;a href=&quot;http://amzn.to/2DlSt7m&quot;&gt;amazon.com&lt;/a&gt;, &lt;a href=&quot;http://amzn.to/2DpUPFx&quot;&gt;amazon.co.uk&lt;/a&gt; or &lt;a href=&quot;http://amzn.to/2Dq5eBA&quot;&gt;amazon.de&lt;/a&gt;&lt;/p&gt;
</content:encoded><author>Fotis</author></item><item><title>Upgrading React Router v4 on an Isomorphic, Redux-powered React web application</title><link>https://falexandrou.dev/posts/2017-10-05-upgrading-react-router</link><guid isPermaLink="true">https://falexandrou.dev/posts/2017-10-05-upgrading-react-router</guid><description>Upgrading the React Router the easy way</description><pubDate>Thu, 05 Oct 2017 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;When working on the next release of ezploy.io (currently moved to &lt;a href=&quot;https://stackmate.io&quot;&gt;stackmate.io&lt;/a&gt;), among the many changes I&apos;ve introduced, I had to upgrade react-router to its latest stable version v4. At first I thought it was a bit of a luxury since I tried to roll out this version for a long time, but then again I thought that it would provide great benefits in terms of security, stability, flexibility and speed, so I went ahead and spent that 3-4 hours anyway; here&apos;s the story behind that.&lt;/p&gt;
&lt;h4&gt;First things first&lt;/h4&gt;
&lt;p&gt;You have to carefully read the &lt;a href=&quot;https://github.com/ReactTraining/react-router/blob/master/packages/react-router/docs/guides/migrating.md&quot;&gt;Official migration guide&lt;/a&gt; that the awesome contributors of &lt;a href=&quot;https://github.com/ReactTraining/react-router&quot;&gt;React Router&lt;/a&gt; have crafted.&lt;/p&gt;
&lt;h4&gt;Key Concepts&lt;/h4&gt;
&lt;p&gt;The first thing you need to know is that &lt;code&gt;react-router&lt;/code&gt; is broken down into several packages that you need to install separately, since the &lt;code&gt;react-router&lt;/code&gt; package is designed as a core package that works with both React and React Native. This is the main reason why you need to familiarize yourself with the &lt;code&gt;react-router-dom&lt;/code&gt; package, which you&apos;ll be using from now on.&lt;/p&gt;
&lt;p&gt;Second, there&apos;s no need to have all of your Routes in one file. Routes can now live inside components, so you need to refactor your main application component and your existing routes file: your main application component should host all the top-level routes from now on, and your (existing) routes file should now export an array of routes (more on that in the next paragraph).&lt;/p&gt;
&lt;p&gt;Third, if you&apos;re using &lt;code&gt;onEnter&lt;/code&gt;, &lt;code&gt;onChange&lt;/code&gt; and other hooks like &lt;code&gt;setRouteLeaveHook&lt;/code&gt;, you may need to take a deeper dive into the &lt;a href=&quot;https://reacttraining.com/react-router/web/guides/philosophy&quot;&gt;React Router documentation&lt;/a&gt;, and perhaps a quick look at &lt;a href=&quot;https://github.com/ReactTraining/react-router/issues/3854&quot;&gt;this thread&lt;/a&gt; too, as these have now been removed. There&apos;s a section at the end of this post explaining what you need to do if you require user confirmation when navigating away from a page (i.e. unsaved changes, etc.).&lt;/p&gt;
&lt;p&gt;Fourth, if you&apos;re passing &lt;code&gt;params&lt;/code&gt; in your data prefetching functions or examining &lt;code&gt;params&lt;/code&gt; in your props, keep in mind that &lt;code&gt;params&lt;/code&gt; is now a property of &lt;code&gt;match&lt;/code&gt; which is the router match object (which we&apos;ll talk about in a bit).&lt;/p&gt;
&lt;p&gt;Last but not least you may need to spend some time refactoring your imports, since the &lt;code&gt;&amp;lt;Link&amp;gt;&lt;/code&gt; component, or the &lt;code&gt;withRouter&lt;/code&gt; wrapper for example, are now located in the &lt;code&gt;react-router-dom&lt;/code&gt; package.&lt;/p&gt;
&lt;h4&gt;Isomorphic rendering &amp;amp; data prefetching&lt;/h4&gt;
&lt;p&gt;In order for the server side rendering to work properly we need to have all data pre-fetched and the Redux store hydrated (if you&apos;re using Redux on your stack). This is achieved by having each data-fetching component set up as follows:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;class MyComponent extends React.Component {
  static fetchData(dispatch, match) {
    // ... dispatch the appropriate actions
  }

  componentDidMount() {
    const { dispatch, match } = this.props;
    MyComponent.fetchData(dispatch, match);
    // ...
  }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Notice how we pass &lt;code&gt;match&lt;/code&gt; as the second argument of the &lt;code&gt;fetchData&lt;/code&gt; function, this used to be router&apos;s &lt;code&gt;params&lt;/code&gt; in previous versions, but since &lt;code&gt;params&lt;/code&gt; is now a property of &lt;code&gt;match&lt;/code&gt;, we pass the &lt;code&gt;match&lt;/code&gt; object instead.&lt;/p&gt;
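&lt;p&gt;To make that concrete, here&apos;s roughly the shape of the v4 &lt;code&gt;match&lt;/code&gt; object for a route like &lt;code&gt;/projects/:id&lt;/code&gt; (a plain-object sketch with illustrative values; the real object is produced by the router):&lt;/p&gt;

```javascript
// Rough shape of the v4 `match` object when '/projects/5'
// is matched against the path pattern '/projects/:id'.
const match = {
  path: '/projects/:id',  // the route's path pattern
  url: '/projects/5',     // the matched portion of the URL
  isExact: true,          // whether the entire URL was matched
  params: { id: '5' },    // what used to be the top-level `params` in v3
};

console.log(match.params.id); // prints '5'
```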
&lt;p&gt;Now, remember when we said you don&apos;t need to have a central place for your routes? That&apos;s sort of true; not having all your routes in one place means that each component should be able to declare Routes inside its &lt;code&gt;render&lt;/code&gt; function, for example, which makes isomorphic rendering with data prefetching a lot trickier. Not anymore, because now&apos;s the time to install &lt;code&gt;react-router-config&lt;/code&gt; and set up a fairly big array of objects containing all the routes in the system (hence the &quot;sort of true&quot; above), much like this:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;module.exports = [
  {
    path: &apos;/&apos;,
    component: Application,
    routes: [
      {
        path: &apos;/dashboard&apos;,
        component: Dashboard,
      },
      // ... more top-level routes here
      // ...
      // a couple of nested routes
      {
        path: &apos;/projects/:id/setup/update&apos;,
        component: UpdateProject,
      },
      {
        path: &apos;/projects/:id/setup&apos;,
        component: Project,
      },
    ]
  }
]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Having done that, you may now use &lt;code&gt;renderRoutes&lt;/code&gt; and &lt;code&gt;matchRoutes&lt;/code&gt; on your server side router. You&apos;re going to match the route based on the url the user is currently at, then get the components that this route uses, apply the &lt;code&gt;fetchData&lt;/code&gt; function and done!&lt;/p&gt;
&lt;p&gt;The most common pattern for doing server side rendering, is having a middleware (in our case an Express.js middleware) which renders on all urls (catch-all) and delegates the actual routing to react-router.&lt;/p&gt;
&lt;p&gt;If what I&apos;ve mentioned above sounds familiar to you, your existing code (pre-upgrade) probably uses the v3 &lt;code&gt;match&lt;/code&gt; function, like this one:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;match({ routes, location: req.url }, (error, redirectLocation, renderProps) =&amp;gt; {
  // I can has server side rendering in here
});
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;It&apos;s now time for that function to retire; you&apos;ll now match your routes with the &lt;code&gt;matchRoutes&lt;/code&gt; function provided by &lt;code&gt;react-router-config&lt;/code&gt;. Here&apos;s the gist of how my server side router looks after applying the changes (make sure you follow the comments in the code):&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;// ...
// import the new StaticRouter from react-router-dom and the functions described above from the config package
import { StaticRouter } from &apos;react-router-dom&apos;;
import { matchRoutes, renderRoutes } from &apos;react-router-config&apos;;

module.exports = () =&amp;gt; {
  // you might need to avoid using `import` for the routes file,
  // if you&apos;re doing hot reloading on the server and you need them to be reloaded
  let routes = require(&apos;./routes&apos;);

  // catch-all middleware that delegates routing to react-router
  router.use(&apos;*&apos;, (req, res, next) =&amp;gt; {
    // ...
    const context = {};

    // We&apos;re matching the route with `matchRoutes`, then we&apos;re adding all of the `fetchData` promises in an Array.
    const dataPromises = matchRoutes(routes, req.originalUrl).map( ({ route, match }) =&amp;gt; {
      return route.component.fetchData ? route.component.fetchData(store.dispatch, match) : Promise.resolve(null);
    });

    // Once all of the `fetchData` promises have been resolved, we may now proceed with the rendering
    Promise.all(dataPromises).then( prefetchData =&amp;gt; {
      // Create a component wrapped in a `&amp;lt;Provider&amp;gt;` containing the store,
      // then render the router inside the provider
      const InitialComponent = &amp;lt;Provider store={store}&amp;gt;
        &amp;lt;StaticRouter location={req.url} context={context}&amp;gt;
          {renderRoutes(routes)}
        &amp;lt;/StaticRouter&amp;gt;
      &amp;lt;/Provider&amp;gt;

      // Render your Express.js layout with the app and the Redux store hydrated
      res.render(&apos;application&apos;, {
        reactApp: ReactDOM.renderToString(InitialComponent),
        initialState: JSON.stringify(store.getState()).replace(/&amp;lt;/g, &apos;\\u003c&apos;),
      });
    })
    // ...
  });
};
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;Done! Your app now renders isomorphically, with all the data pre-fetched and the Redux store hydrated. We&apos;re not entirely done though, let&apos;s just make sure that our client app is up to date, here&apos;s the gist again:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import { BrowserRouter as Router } from &apos;react-router-dom&apos;;
import { renderRoutes } from &apos;react-router-config&apos;;

// Import the same long array containing all the routes or just render your main application component here
// in case you find this too much.
import routes from &quot;routes&quot;;

// ...
const InitialComponent = (
  &amp;lt;Provider store={store}&amp;gt;
    &amp;lt;Router&amp;gt;
      { renderRoutes(routes) }
    &amp;lt;/Router&amp;gt;
  &amp;lt;/Provider&amp;gt;
);

ReactDOM.render(InitialComponent, document.getElementById(&quot;app&quot;));
// ...
&lt;/code&gt;&lt;/pre&gt;
&lt;h4&gt;Additional things to consider:&lt;/h4&gt;
&lt;ul&gt;
&lt;li&gt;When using nested routes like for example &lt;code&gt;/projects/10&lt;/code&gt; and then &lt;code&gt;/projects/10/member/1&lt;/code&gt;, you may need to mark the first one as &lt;code&gt;exact&lt;/code&gt;, otherwise you&apos;ll end up resolving unexpected components&lt;/li&gt;
&lt;li&gt;On the same subject, let&apos;s say we have the following route set up&lt;/li&gt;
&lt;/ul&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;Route path=&quot;/projects/:project_id&quot; component={Project} /&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;and inside the &lt;code&gt;Project&lt;/code&gt; component you have the following route set up:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;  // ...
  &amp;lt;Switch&amp;gt;
    // ...
    &amp;lt;Route path=&quot;/projects/:project_id/collaborators/:collaborator_id&quot; component={ProjectCollaborator} /&amp;gt;
  &amp;lt;/Switch&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;When visiting &lt;code&gt;/projects/5/collaborators/15&lt;/code&gt;, you may not be able to access the &lt;code&gt;:collaborator_id&lt;/code&gt; param inside the &lt;code&gt;Project&lt;/code&gt; component due to &lt;a href=&quot;https://github.com/ReactTraining/react-router/issues/5429&quot;&gt;this issue&lt;/a&gt;.&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;p&gt;If you&apos;ve been using the &lt;code&gt;setRouteLeaveHook&lt;/code&gt; hook to prompt your users before navigating away from a page, you can use &lt;a href=&quot;https://reacttraining.com/react-router/web/api/BrowserRouter/getUserConfirmation-func&quot;&gt;getUserConfirmation&lt;/a&gt; instead. If you need a bit more granularity when prompting users, consider the &lt;a href=&quot;https://reacttraining.com/react-router/web/api/Prompt&quot;&gt;Prompt&lt;/a&gt; component, and if you need to render a custom React component or run a custom hook when confirming or canceling navigation, there&apos;s a &lt;a href=&quot;https://github.com/ZacharyRSmith/react-router-navigation-prompt&quot;&gt;nice replacement&lt;/a&gt; for that.&lt;/p&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;p&gt;If you&apos;re using &lt;code&gt;react-router-redux&lt;/code&gt; to navigate programmatically, eg. &lt;code&gt;dispatch(push(&apos;/my-url&apos;))&lt;/code&gt;, you need to upgrade to its &lt;code&gt;next&lt;/code&gt; version, as that project has moved as well. &lt;a href=&quot;https://github.com/ReactTraining/react-router/tree/master/packages/react-router-redux&quot;&gt;Their documentation&lt;/a&gt; provides a comprehensive example featuring the &lt;code&gt;ConnectedRouter&lt;/code&gt; component, which connects to the store automatically.&lt;/p&gt;
&lt;/li&gt;
&lt;/ul&gt;
</content:encoded><author>Fotis</author></item><item><title>Django Snippets for everyday problems</title><link>https://falexandrou.dev/posts/2016-08-17-django-snippets-to-help-ease-the-pain</link><guid isPermaLink="true">https://falexandrou.dev/posts/2016-08-17-django-snippets-to-help-ease-the-pain</guid><description>Love Django? So do I. Here&apos;s a list of snippets I&apos;ve been collecting to improve my day to day.</description><pubDate>Wed, 17 Aug 2016 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;In late 2015 I found myself working with Python, Django and py.test. I was trying to apply some practices that I had been applying for a very long time with different tools, but Django resisted, so here&apos;s the survival kit I had while I was struggling not to compare Django with more modern frameworks. I&apos;m sure a more experienced Django engineer would have found more elegant solutions but these actually did the trick for me.&lt;/p&gt;
&lt;h3&gt;1. HTML arrays in POST requests&lt;/h3&gt;
&lt;p&gt;Coming from a background where HTML arrays (or hashes) are posted in forms can cause some pain in Django. If, for example, you come from Ruby on Rails or PHP, you may have found it easy to access HTML arrays by simply checking the &lt;code&gt;request&lt;/code&gt; object in Rails or the &lt;code&gt;$_POST&lt;/code&gt; array in PHP. So, for example, if you need to post the following form field:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;&amp;lt;input type=&quot;text&quot; name=&quot;person[name]&quot; value=&quot;John&quot;&amp;gt;
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;In Rails, for example, accessing &lt;code&gt;request[:person][:name]&lt;/code&gt; would give the expected result, while in Django this is not the case. To achieve the same thing, the following snippet is what you need:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;import re

def get_dict_array(post, key):
    &quot;&quot;&quot;
    Get an entry from an HTML array eg:
    &amp;lt;input type=&quot;text&quot; name=&quot;person[name]&quot; value=&quot;John&quot;&amp;gt;
    Usage:
    get_dict_array(request.POST, &quot;person&quot;)
    &quot;&quot;&quot;
    result = {}
    if post:
        # Raw string avoids invalid escape sequence warnings
        patt = re.compile(r&apos;^([a-zA-Z_]\w+)\[([a-zA-Z_\-0-9][\w\-]*)\]$&apos;)
        for post_name, value in post.items():
            match = patt.match(post_name)
            if not match or not value:
                continue
            if match.group(1) == key:
                result[match.group(2)] = value
    return result
&lt;/code&gt;&lt;/pre&gt;
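&lt;p&gt;To see the helper in action outside a Django request cycle, here&apos;s a quick self-contained sketch; a plain &lt;code&gt;dict&lt;/code&gt; stands in for &lt;code&gt;request.POST&lt;/code&gt;, and the field names are made up for the example:&lt;/p&gt;

```python
import re

def get_dict_array(post, key):
    """Collect values posted as HTML arrays, eg. person[name] -> {'name': ...}."""
    result = {}
    if post:
        patt = re.compile(r'^([a-zA-Z_]\w+)\[([a-zA-Z_\-0-9][\w\-]*)\]$')
        for post_name, value in post.items():
            match = patt.match(post_name)
            if not match or not value:
                continue
            if match.group(1) == key:
                result[match.group(2)] = value
    return result

# A plain dict stands in for request.POST; keys without brackets are ignored
posted = {"person[name]": "John", "person[age]": "40", "csrfmiddlewaretoken": "abc"}
print(get_dict_array(posted, "person"))  # {'name': 'John', 'age': '40'}
```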
&lt;h3&gt;2. Error message overloading for Unique composite keys&lt;/h3&gt;
&lt;p&gt;Let&apos;s say in your database, there is a table with a composite key, for example in a table of users&apos; Portfolio Items, the fields &lt;code&gt;user&lt;/code&gt; and &lt;code&gt;url&lt;/code&gt; should be unique together. In case you need to customise the error message for when a user enters an item which already exists, then you&apos;re in for a surprise: You need to overload the model&apos;s &lt;code&gt;unique_error_message&lt;/code&gt; method. Sounds dangerous? Probably because it is...&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;def unique_error_message(self, model_class, unique_check):
    if model_class == type(self) and unique_check == (&apos;user&apos;, &apos;url&apos;):
        return _(&quot;There already is a portfolio item with the specific url&quot;)
    return super(MyModel, self).unique_error_message(model_class, unique_check)
&lt;/code&gt;&lt;/pre&gt;
&lt;h3&gt;3. Not able to chain scopes (use QuerySet instead of ModelManager and &lt;code&gt;objects = QuerySet.as_manager()&lt;/code&gt;)&lt;/h3&gt;
&lt;p&gt;If you have created a model in django and you want to set a few scopes for it, you might need to use a &lt;code&gt;ModelManager&lt;/code&gt;, right?
Well, sort of... You see, chaining &lt;code&gt;ModelManager&lt;/code&gt; objects can be painful, so if for example you need to have&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;Users.objects.active().social_user().all()
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;you might get a few not-so-clear errors.&lt;/p&gt;
&lt;p&gt;The most solid approach I&apos;ve found, is the following:&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;Declare a &lt;code&gt;QuerySet&lt;/code&gt; instead of &lt;code&gt;ModelManager&lt;/code&gt;&lt;/li&gt;
&lt;li&gt;In your model, you can set &lt;code&gt;objects = QuerySet.as_manager()&lt;/code&gt; to use your custom objects manager.&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;Example:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;from django.db import models

class AccountQuerySet(models.QuerySet):
    # ... other scopes here ...

    def active(self):
        &quot;&quot;&quot;Filter accounts by active status&quot;&quot;&quot;
        return self.filter(is_active=True)

class Account(models.Model):
    # ... fields go here ...
    
    objects = AccountQuerySet.as_manager()
&lt;/code&gt;&lt;/pre&gt;
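&lt;p&gt;The reason this works is that every &lt;code&gt;QuerySet&lt;/code&gt; method returns another &lt;code&gt;QuerySet&lt;/code&gt;, so scopes compose in any order. A toy stand-in (plain Python, no Django; the class and field names are invented for illustration) shows the pattern:&lt;/p&gt;

```python
class ToyQuerySet:
    """A tiny stand-in for Django's QuerySet: each scope returns a new,
    further-filtered ToyQuerySet, so scopes chain in any order."""

    def __init__(self, rows):
        self.rows = rows

    def filter(self, **kwargs):
        # Keep only the rows matching every given field/value pair
        keep = [r for r in self.rows
                if all(r.get(k) == v for k, v in kwargs.items())]
        return ToyQuerySet(keep)

    # Scopes, analogous to AccountQuerySet.active() above
    def active(self):
        return self.filter(is_active=True)

    def social(self):
        return self.filter(has_social=True)

users = ToyQuerySet([
    {"name": "ann", "is_active": True, "has_social": True},
    {"name": "bob", "is_active": True, "has_social": False},
    {"name": "cat", "is_active": False, "has_social": True},
])
print([u["name"] for u in users.active().social().rows])  # ['ann']
```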
&lt;h3&gt;4. Converting &lt;code&gt;QueryDict&lt;/code&gt; to plain &lt;code&gt;dict&lt;/code&gt;&lt;/h3&gt;
&lt;p&gt;Casting a &lt;code&gt;QueryDict&lt;/code&gt; to a plain &lt;code&gt;dict&lt;/code&gt; might sound like a trivial thing, but you need to be aware of the following caveat: &lt;code&gt;QueryDict.dict()&lt;/code&gt; and &lt;code&gt;dict(QueryDict...)&lt;/code&gt; return different things, as shown in the output below&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;In [3]: QueryDict(&quot;utm_source=email_campaign&quot;).dict()
Out[3]: {u&apos;utm_source&apos;: u&apos;email_campaign&apos;}

In [4]: dict(QueryDict(&quot;utm_source=email_campaign&quot;))
Out[4]: {u&apos;utm_source&apos;: [u&apos;email_campaign&apos;]}
&lt;/code&gt;&lt;/pre&gt;
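&lt;p&gt;You can reproduce both shapes without Django using the standard library&apos;s query-string helpers, which behave analogously (the list-valued form exists because a key may legally appear multiple times in a query string):&lt;/p&gt;

```python
from urllib.parse import parse_qs, parse_qsl

query = "utm_source=email_campaign"

# parse_qs keeps every value as a list, like dict(QueryDict(...)):
print(parse_qs(query))         # {'utm_source': ['email_campaign']}

# Flattening to single values mimics QueryDict.dict():
print(dict(parse_qsl(query)))  # {'utm_source': 'email_campaign'}
```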
&lt;h3&gt;5. Package seems missing even though you have just installed it&lt;/h3&gt;
&lt;p&gt;You have just installed a package, for example &lt;code&gt;boto&lt;/code&gt;, and you want to run a command, for example &lt;code&gt;fab&lt;/code&gt;.
If you receive the following error:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;ImportError: No module named boto
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;all you have to do is:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;export PYTHONPATH=/usr/local/lib/python2.7/site-packages
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;or in order to make the change permanent, you need to add this line to your shell&apos;s &lt;code&gt;rc&lt;/code&gt; file like &lt;code&gt;.bashrc&lt;/code&gt; or &lt;code&gt;.zshrc&lt;/code&gt;.&lt;/p&gt;
&lt;h3&gt;6. Uninstall all python packages&lt;/h3&gt;
&lt;p&gt;Want to start fresh or for some reason remove every python package in your system? There you go:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;pip freeze | xargs pip uninstall -y
&lt;/code&gt;&lt;/pre&gt;
</content:encoded><author>Fotis</author></item><item><title>Git 101 - Workshop on Found.ation co-working space</title><link>https://falexandrou.dev/posts/2015-03-25-git-training-in-foundation</link><guid isPermaLink="true">https://falexandrou.dev/posts/2015-03-25-git-training-in-foundation</guid><description>Workshop on Found.ation co-working space about Git</description><pubDate>Wed, 25 Mar 2015 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;In March 2015, &lt;a href=&quot;http://thefoundation.gr/&quot;&gt;Found.ation&lt;/a&gt; invited me to host 2 workshops for how to use Git. The workshops were of great success and I was very happy that people would not also start using Git, but prefer it over other SCM tools. The event was sold out both times, which made me a bit nervous at start but it proved to be a great experience, hopefully for the trainees as well. At the end of the workshop, we gave away 3 GitHub coupons for 6 months, offered by &lt;a href=&quot;https://github.com/&quot;&gt;GitHub&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;You can find the material I&apos;ve prepared on &lt;a href=&quot;http://gitcompanion.github.io/&quot;&gt;gitcompanion.github.io&lt;/a&gt; and the event page and some photos on &lt;a href=&quot;http://thefoundation.gr/events/educ-ation-class-git-training-101/&quot;&gt;Found.ation&apos;s website&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;&amp;lt;div class=&quot;w-full&quot;&amp;gt;
&amp;lt;img src=&quot;/images/git-1.jpg&quot; class=&quot;image&quot; alt=&quot;Training in progress&quot;&amp;gt;
&amp;lt;img src=&quot;/images/git-2.jpg&quot; class=&quot;image&quot; alt=&quot;Training in progress&quot;&amp;gt;
&amp;lt;/div&amp;gt;&lt;/p&gt;
&lt;p&gt;Images courtesy of &lt;a href=&quot;http://thefoundation.gr/events/educ-ation-class-git-training-101/&quot;&gt;Found.ation&lt;/a&gt;&lt;/p&gt;
</content:encoded><author>Fotis</author></item><item><title>Hacking your workflow - FrontMass 2014 Keynote</title><link>https://falexandrou.dev/posts/2015-01-10-frontmass-2014-my-keynote</link><guid isPermaLink="true">https://falexandrou.dev/posts/2015-01-10-frontmass-2014-my-keynote</guid><description>My talk on Developer Productivity on JoomlaDay 2014</description><pubDate>Sun, 11 Jan 2015 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;Right before Christmas 2014, the &lt;a href=&quot;http://skgtech.io&quot;&gt;SKGtech&lt;/a&gt; team held a great conference in Thessaloniki, called &lt;a href=&quot;http://frontmass.org&quot;&gt;FrontMass&lt;/a&gt;. I was lucky enough to be one of the keynote speakers, covering my favorite topic: How to hack your workflow for better productivity.&lt;/p&gt;
&lt;p&gt;My main points were how to effectively use tooling such as &lt;code&gt;vagrant&lt;/code&gt;, &lt;code&gt;git&lt;/code&gt; and &lt;code&gt;docker&lt;/code&gt; to boost your productivity.&lt;/p&gt;
&lt;p&gt;&amp;lt;div class=&quot;aspect-4/3 w-full mb-10&quot;&amp;gt;
&amp;lt;iframe
src=&quot;/frontmass14/index.html&quot;
frameborder=&quot;0&quot;
width=&quot;100%&quot;
height=&quot;100%&quot;
&amp;gt;&amp;lt;/iframe&amp;gt;&lt;br /&gt;
&amp;lt;/div&amp;gt;&lt;/p&gt;
&lt;p&gt;&amp;lt;div class=&quot;w-full&quot;&amp;gt;
&amp;lt;img src=&quot;/images/fm1.jpg&quot; class=&quot;image&quot; alt=&quot;FrontMASS 2014&quot;&amp;gt;
&amp;lt;img src=&quot;/images/fm2.jpg&quot; class=&quot;image&quot; alt=&quot;FrontMASS 2014&quot;&amp;gt;
&amp;lt;img src=&quot;/images/fm3.jpg&quot; class=&quot;image&quot; alt=&quot;FrontMASS 2014&quot;&amp;gt;
&amp;lt;img src=&quot;/images/fm4.jpg&quot; class=&quot;image&quot; alt=&quot;FrontMASS 2014&quot;&amp;gt;
&amp;lt;/div&amp;gt;&lt;/p&gt;
&lt;p&gt;Images courtesy of &lt;a href=&quot;https://www.flickr.com/photos/christosbacharakis/sets/72157649825030806/&quot;&gt;Christos Bacharakis&lt;/a&gt;&lt;/p&gt;
</content:encoded><author>Fotis</author></item><item><title>Switching to Jekyll GitHub pages</title><link>https://falexandrou.dev/posts/2014-10-26-switching-to-github-pages</link><guid isPermaLink="true">https://falexandrou.dev/posts/2014-10-26-switching-to-github-pages</guid><description>How I built this blog and made it super-fast and easy to maintain.</description><pubDate>Sun, 26 Oct 2014 00:00:00 GMT</pubDate><content:encoded>&lt;h2&gt;Goal: An easy to maintain personal website&lt;/h2&gt;
&lt;p&gt;Being a Web Developer is complicated by itself, so, when it comes to maintaining a personal website about ourselves, most of us are terrible at it. We either envy the fancy-designed websites by hipster designers or we enjoy the simplicity of typography-based designs by UX experts.&lt;/p&gt;
&lt;p&gt;But, at the end of the day it all comes down to this; maintaining a personal website should be something simple, so it can be done on a regular basis, right?&lt;/p&gt;
&lt;h2&gt;Abstracting information&lt;/h2&gt;
&lt;p&gt;When I decided to re-build my personal website (I take such a decision once every two years or so), I thought I should include all the stuff that all the cool kids do; Portfolio, Client testimonials, Contact form, Pictures of me traveling, Pictures of me giving talks like a rock-star in front of 3 people etc.&lt;/p&gt;
&lt;p&gt;&lt;strong&gt;No&lt;/strong&gt;. Here&apos;s the thing: If I do all of this, then I would have to maintain duplicate information, because I would update my social profiles anyway. Also, I&apos;m not an agency (at least not anymore), so my work is not that frequently updated. I&apos;ll add a page with my open source projects in the near future instead.&lt;/p&gt;
&lt;p&gt;Contact forms are easily abused by bots, which led to ~150 very enlightening spam email messages every day. Most of them were about shoes, which is really odd. I would either use &lt;a href=&quot;https://en.wikipedia.org/wiki/CAPTCHA&quot;&gt;CAPTCHA&lt;/a&gt; (sic) or create a public email and accept all kinds of spam there.&lt;/p&gt;
&lt;h2&gt;Keeping it simple&lt;/h2&gt;
&lt;p&gt;I&apos;ve added my social media profiles to my contact page; feel free to use them. I kept my homepage clean, listing only my most recent posts.&lt;/p&gt;
&lt;h2&gt;What was my weapon of choice&lt;/h2&gt;
&lt;p&gt;&lt;a href=&quot;http://jekyllrb.com&quot;&gt;Jekyll&lt;/a&gt; is a minimalistic static site generator that makes you wonder why it isn&apos;t even more popular. It uses &lt;a href=&quot;http://daringfireball.net/projects/markdown/syntax&quot;&gt;Markdown&lt;/a&gt; and HTML, can be run locally and can be hosted on &lt;a href=&quot;https://pages.github.com&quot;&gt;GitHub pages&lt;/a&gt; (free).&lt;/p&gt;
&lt;h2&gt;Shipping it&lt;/h2&gt;
&lt;p&gt;On a terminal run&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ gem install jekyll
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will install Jekyll as a ruby gem&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ jekyll new my-awesome-website
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;This will create a new website&lt;/p&gt;
&lt;p&gt;Then follow a pretty straight-forward git-based process to publish your website to &lt;a href=&quot;https://pages.github.com&quot;&gt;GitHub pages&lt;/a&gt;.&lt;/p&gt;
&lt;p&gt;The main benefit is that I don&apos;t have to maintain a server, upgrade, care about security updates, running database backups, running a mail server for a mere developer&apos;s blog. I can publish to my blog that easily:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;$ git add --all
$ git commit -m &quot;My new post&quot;
$ git push origin master
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;There it is! I just published a new blog post simply by doing a &lt;code&gt;git commit&lt;/code&gt; without the need of logging into a control panel and without even leaving my IDE.&lt;/p&gt;
&lt;p&gt;So, here it is. My new blog, running on &lt;a href=&quot;https://pages.github.com&quot;&gt;GitHub pages&lt;/a&gt;, powered by &lt;a href=&quot;http://jekyllrb.com&quot;&gt;Jekyll&lt;/a&gt;, pure &lt;a href=&quot;http://daringfireball.net/projects/markdown/syntax&quot;&gt;Markdown&lt;/a&gt;, HTML and &lt;a href=&quot;https://en.wikipedia.org/wiki/Emoji&quot;&gt;Emoji&lt;/a&gt;. As a friend says: It&apos;s back to the future!&lt;/p&gt;
&lt;p&gt;Hope you like it :smile: :beer:&lt;/p&gt;
</content:encoded><author>Fotis</author></item><item><title>Display PDF in-page without a javascript plugin</title><link>https://falexandrou.dev/posts/2014-03-25-display-pdf-in-page-without-a-javascript-plugin</link><guid isPermaLink="true">https://falexandrou.dev/posts/2014-03-25-display-pdf-in-page-without-a-javascript-plugin</guid><description>Why bother installing a jQuery plugin, when all of thisis built right inside the browser?</description><pubDate>Tue, 25 Mar 2014 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;There are cases where a PDF file is more than a download link and you need to display it inline inside your page. Recently I’ve stumbled upon a conversation where somebody was looking for a good jQuery plugin that displays PDF files. One of the suggestions was the amazing pdf.js which is developed by the Mozilla foundation and it’s pretty much a full blown PDF viewer in your browser.&lt;/p&gt;
&lt;p&gt;If you need a really lightweight solution and you don’t mind that users whose browsers can’t display PDFs inline won’t see the document rendered in-page, here’s a snippet you may use:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;/**
 * When browser supports inline pdfs
 * There is no need to append a large jquery plugin that displays them inline.
 *
 * You can actually use an iframe (and style it appropriately)
 * or a link whenever inline PDF viewing is not supported
 *
 * This function is fairly simple and it&apos;s only for demo purposes
 */
function appendPdf(id, url) {
    var $el = $(&apos;#&apos;+id);
    // Check whether the browser supports displaying pdf files inline (ie. without downloading them)
    if (navigator &amp;amp;&amp;amp; navigator.mimeTypes &amp;amp;&amp;amp; navigator.mimeTypes[&apos;application/pdf&apos;]) {
        // You may add extra attributes (eg. to allow transparency) or style the iframe
        $el.html(&apos;&amp;lt;iframe src=&quot;&apos;+url+&apos;&quot;&amp;gt;&amp;lt;/iframe&amp;gt;&apos;);
    } else {
        $el.html(&apos;&amp;lt;a href=&quot;&apos;+url+&apos;&quot;&amp;gt;Download file&amp;lt;/a&amp;gt;&apos;);
    }
}
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;What we do here is check whether the browser has a plugin registered for a certain mime type; the check&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;navigator.mimeTypes[&apos;application/pdf&apos;]
&lt;/code&gt;&lt;/pre&gt;
&lt;p&gt;returns &lt;code&gt;undefined&lt;/code&gt; if the browser doesn’t have a plugin for that mime type. If the browser does support PDFs, we append a simple iframe that can be styled and sized accordingly; otherwise we append a link to download the file.&lt;/p&gt;
&lt;p&gt;As you can see, it’s a fairly simple solution and significantly lighter than any javascript component.&lt;/p&gt;
</content:encoded><author>Fotis</author></item><item><title>Vagrant Apache or nginx serving corrupt Javascript and CSS files</title><link>https://falexandrou.dev/posts/2014-02-13-vagrant-apache-or-nginx-serving-corrupt-javascript-and-css-files</link><guid isPermaLink="true">https://falexandrou.dev/posts/2014-02-13-vagrant-apache-or-nginx-serving-corrupt-javascript-and-css-files</guid><description>A very common issue with nginx on Vagrant where the static files are served as corrupt</description><pubDate>Wed, 12 Feb 2014 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;If you prefer a virtualized environment for your web development purposes, you may find Vagrant a really handy solution. Vagrant is a fantastic tool that creates a virtual machine which can be provisioned with Chef or Puppet and be re-packaged for future distribution.&lt;/p&gt;
&lt;p&gt;One thing you may need to set up first is turning off the sendfile option in your web server; otherwise you might end up getting corrupt static files such as javascript or css files. This is actually a VirtualBox bug, as documented in Vagrant’s v1.1 documentation, and here’s a simple solution:&lt;/p&gt;
&lt;pre&gt;&lt;code&gt;# A VirtualBox bug forces vagrant to serve
# corrupt files via Apache or nginx
# The solution to that would be to turn off
# the SendFile option in apache or nginx
#
# If you use apache as your main web server
# add this directive in your httpd.conf (or apache.conf)
# configuration file name may vary in various systems
#
EnableSendfile off

# If you use nginx as your main web server
# add this directive in your nginx.conf
sendfile off;
&lt;/code&gt;&lt;/pre&gt;
</content:encoded><author>Fotis</author></item><item><title>Hacking the way you work: My keynote on JoomlaDay 2013</title><link>https://falexandrou.dev/posts/2013-06-17-joomla-day-hacking-your-productivity</link><guid isPermaLink="true">https://falexandrou.dev/posts/2013-06-17-joomla-day-hacking-your-productivity</guid><description>My talk on Developer Productivity on JoomlaDay 2013</description><pubDate>Mon, 17 Jun 2013 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;&amp;lt;div class=&quot;aspect-4/3 w-full mb-10&quot;&amp;gt;
&amp;lt;iframe
src=&quot;//www.slideshare.net/slideshow/embed_code/key/4qTVnDwz4UUeUd&quot;
frameborder=&quot;0&quot;
marginwidth=&quot;0&quot;
marginheight=&quot;0&quot;
scrolling=&quot;no&quot;
width=&quot;100%&quot;
height=&quot;100%&quot;
allowfullscreen
&amp;gt;&amp;lt;/iframe&amp;gt;
&amp;lt;/div&amp;gt;&lt;/p&gt;
&lt;p&gt;&amp;lt;div class=&quot;w-full&quot;&amp;gt;
&amp;lt;img src=&quot;/images/jd.jpg&quot; alt=&quot;JoomlaDay 2013&quot; class=&quot;image&quot;&amp;gt;
&amp;lt;/div&amp;gt;&lt;/p&gt;
&lt;p&gt;&amp;lt;small&amp;gt;
Image courtesy of &lt;a href=&quot;https://www.flickr.com/photos/joomladaygreece/9262496162/in/album-72157634594976836/&quot;&gt;JoomlaDay Greece Flickr&lt;/a&gt;
&amp;lt;/small&amp;gt;&lt;/p&gt;
</content:encoded><author>Fotis</author></item><item><title>How GitHub Uses GitHub to Build GitHub</title><link>https://falexandrou.dev/posts/2011-09-22-how-github-uses-github-to-build-github</link><guid isPermaLink="true">https://falexandrou.dev/posts/2011-09-22-how-github-uses-github-to-build-github</guid><description>A deeper dive into GitHub&apos;s toolbox</description><pubDate>Thu, 22 Sep 2011 00:00:00 GMT</pubDate><content:encoded>&lt;p&gt;This is probably the presentation I&apos;ve read the most while going through SpeakerDeck or SlideShare. I love GitHub both as a software product and philosophy as a team, so I try to read as much as i can related to them.&lt;/p&gt;
&lt;p&gt;Here&apos;s a presentation by &lt;a href=&quot;http://zachholman.com&quot;&gt;Zach Holman&lt;/a&gt; on &lt;a href=&quot;http://zachholman.com/talk/how-github-uses-github-to-build-github/&quot;&gt;How GitHub uses GitHub to build GitHub&lt;/a&gt;&lt;/p&gt;
&lt;p&gt;Enjoy!&lt;/p&gt;
&lt;p&gt;&amp;lt;script async class=&quot;speakerdeck-embed&quot; data-id=&quot;4e79b461c9bdcb003f00331d&quot; data-ratio=&quot;1.33333333333333&quot; src=&quot;//speakerdeck.com/assets/embed.js&quot;&amp;gt;&amp;lt;/script&amp;gt;&lt;/p&gt;
</content:encoded><author>Fotis</author></item></channel></rss>