<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
  <channel>
    <title>Ramkumar K R</title>
    <description></description>
    <link>https://ramkumar-kr.github.io/</link>
    <atom:link href="https://ramkumar-kr.github.io/feed.xml" rel="self" type="application/rss+xml"/>
    <pubDate>Sun, 13 Feb 2022 03:14:18 +0000</pubDate>
    <lastBuildDate>Sun, 13 Feb 2022 03:14:18 +0000</lastBuildDate>
    <generator>Jekyll v3.9.0</generator>
    
      <item>
        <title>A silly mistake</title>
        <description>&lt;p&gt;I was working on an assignment whose problem statement was like this -&lt;/p&gt;
&lt;blockquote&gt;
  &lt;p&gt;You are writing a program that guides a robot over terrain from a starting location to a destination. The program gets a two-dimensional matrix where the indexes define the x and y coordinates. The elements represent the elevation of the terrain. I was to use different search algorithms such as A*, BFS, and UCS.&lt;/p&gt;
&lt;/blockquote&gt;

&lt;p&gt;I finished my assignment well ahead of time and was happy with the design. I used the strategy pattern and chose the relevant algorithm based on the input. I spent days designing the right heuristic for A* search. Despite all that time and effort, I got a meager 22%. It turns out that I read the elevation input incorrectly: I swapped the X and Y coordinates. Here’s the two-letter change that would fix my code -&lt;/p&gt;

&lt;div class=&quot;language-diff highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;&lt;span class=&quot;gd&quot;&gt;-        landing.setZ(terrainMap.get(landing.getX()).get(landing.getY()));
&lt;/span&gt;&lt;span class=&quot;gi&quot;&gt;+        landing.setZ(terrainMap.get(landing.getY()).get(landing.getX()));
&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
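&lt;p&gt;The bug is easy to reproduce in any language: with a row-major matrix, the element at coordinates (x, y) lives at matrix[y][x], not matrix[x][y]. A minimal Python sketch with hypothetical values:&lt;/p&gt;

```python
# Rows are indexed by y, columns by x: terrain[y][x] is the elevation at (x, y).
terrain = [
    [10, 20, 30],  # y = 0
    [40, 50, 60],  # y = 1
]

x, y = 2, 1

# The swapped read asks for row 2, which does not exist in this matrix.
wrong = None
try:
    wrong = terrain[x][y]
except IndexError:
    wrong = "IndexError"

# The correct read: row y = 1, column x = 2, giving 60.
right = terrain[y][x]

print(wrong, right)
```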

&lt;p&gt;Although I was initially frustrated with myself, I realized the consequences that code can have on people. For me, it was almost harmless, since I only lost a few marks. However, code can also affect the lives of many people. This assignment taught me more about the effects code has on the world than about search algorithms. I feel it is important for everyone who writes code to understand its implications on the world and proceed with caution.&lt;/p&gt;

&lt;p&gt;Another aspect is testing. Although it is not possible to test every scenario for non-trivial code, this mistake showed that my testing needed improvement. In my next assignment, I stuck to Test-driven development (TDD), which took a bit of extra time but ensured that I covered almost all major scenarios.&lt;/p&gt;

&lt;p&gt;I end this blog post with a slightly modified quote from Martin Golding -&lt;/p&gt;
&lt;blockquote&gt;
  &lt;p&gt;Always code as if the person who ends up using or maintaining your code is a violent psychopath who knows where you live&lt;/p&gt;
&lt;/blockquote&gt;
</description>
        <pubDate>Sat, 28 Dec 2019 22:45:00 +0000</pubDate>
        <link>https://ramkumar-kr.github.io/2019/12/28/a-silly-mistake.html</link>
        <guid isPermaLink="true">https://ramkumar-kr.github.io/2019/12/28/a-silly-mistake.html</guid>
        
        <category>AI</category>
        
        <category>Mistakes</category>
        
        <category>Testing</category>
        
        <category>Learning</category>
        
        
      </item>
    
      <item>
        <title>Optimizing spend on AWS</title>
        <description>&lt;h1 id=&quot;optimizing-spend-on-aws&quot;&gt;Optimizing spend on AWS&lt;/h1&gt;

&lt;p&gt;A large chunk of the internet is hosted on AWS today. However, without proper measures, the cost of running infrastructure on AWS can increase dramatically. Taking on the responsibility of DevOps at my company (which should ideally be called DevSecFinOps), I was tasked with bringing down our AWS spend. After 2 months and a region migration, we brought our bill down by 50%. This post covers the various steps I took to optimize our spend on AWS.&lt;/p&gt;

&lt;h2 id=&quot;where-am-i-spending-money&quot;&gt;Where am I spending money?&lt;/h2&gt;

&lt;p&gt;We must first identify where money is being spent on AWS. AWS provides three tools, at increasing levels of granularity, to identify expensive resources.&lt;/p&gt;

&lt;h3 id=&quot;aws-monthly-bill&quot;&gt;AWS monthly bill&lt;/h3&gt;
&lt;p&gt;AWS provides a monthly bill with region- and service-level visibility. For example, the bill can say that $100 was spent on m5.large instances in the ap-south-1 region for the month.&lt;/p&gt;

&lt;h3 id=&quot;cost-explorer&quot;&gt;Cost explorer&lt;/h3&gt;

&lt;p&gt;AWS Cost Explorer goes one level deeper by providing day-level visibility. Cost Explorer can be very useful for preliminary cost analysis. For example, it can show that an additional m5.large instance runs in the ap-south-1 region every Monday.&lt;/p&gt;

&lt;h3 id=&quot;aws-cost-and-usage-report&quot;&gt;AWS Cost and usage report&lt;/h3&gt;

&lt;p&gt;The Cost and Usage Report provides the highest level of visibility, with an hourly account of expenses. However, it is also very bulky and requires proper tagging of resources, plus tools such as Athena or Elasticsearch, to analyze the data.&lt;/p&gt;

&lt;h4 id=&quot;tagging-your-resources&quot;&gt;Tagging your resources&lt;/h4&gt;

&lt;p&gt;Tagging resources is one of the most important steps for cost optimization. Proper tags are highly effective in identifying where your infrastructure spends money. To paraphrase Edna Mode, “Done properly, tagging resources can be a heroic act…Done properly”&lt;/p&gt;

&lt;p&gt;Usually, 3-4 tags are common to all resources, alongside other application-specific tags. A few common tags I use are &lt;strong&gt;Application&lt;/strong&gt;, &lt;strong&gt;Stack&lt;/strong&gt;, &lt;strong&gt;Team&lt;/strong&gt;, &lt;strong&gt;Owner&lt;/strong&gt; and &lt;strong&gt;Environment&lt;/strong&gt;.&lt;/p&gt;

&lt;h4 id=&quot;indexing-the-report-to-elasticsearch&quot;&gt;Indexing the report to Elasticsearch&lt;/h4&gt;

&lt;p&gt;The Cost and Usage Report is usually huge and cannot be analyzed manually. One way to analyze the data is to index the report into Elasticsearch. With a visualization tool such as Kibana or Grafana, it is then possible to build dashboards and visualize your spend.
I indexed the report into Elasticsearch using Logstash.&lt;/p&gt;

&lt;p&gt;Another way to analyze the report is to use Amazon Athena to run relational-database-style queries on the files in S3.&lt;/p&gt;

&lt;p&gt;Some open-source repos which do this are&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/ProTip/aws-elk-billing&quot;&gt;https://github.com/ProTip/aws-elk-billing&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;&lt;a href=&quot;https://github.com/awslabs/aws-detailed-billing-parser&quot;&gt;https://github.com/awslabs/aws-detailed-billing-parser&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;elasticity&quot;&gt;Elasticity&lt;/h2&gt;

&lt;p&gt;My company’s infra requirements change from time to time. For example, we may require more instances during office hours or during a sale. Instead of running enough instances to handle the maximum load all the time, we can scale the fleet up or down as required. The following AWS document provides much more detail - &lt;a href=&quot;https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-benefits.html#autoscaling-benefits-example&quot;&gt;https://docs.aws.amazon.com/autoscaling/ec2/userguide/auto-scaling-benefits.html#autoscaling-benefits-example&lt;/a&gt;&lt;/p&gt;

&lt;p&gt;Along with the number of instances, the configuration of each instance also contributes to both performance and cost. Choosing the right instance for the right application requires a deep understanding of the application, as well as a scientific way to measure its performance under various conditions.&lt;/p&gt;

&lt;h2 id=&quot;design-for-cost-optimization&quot;&gt;Design for cost optimization&lt;/h2&gt;

&lt;p&gt;While designing applications, we must always consider cost and take decisions which optimize our spend in the long run. For example, if you have multiple services which require load balancing, using a single ALB with host- or path-based routing is more cost-efficient than having a separate load balancer for each service. To achieve this, the applications must be designed in such a way that adding host- or path-based routing does not cause side effects.&lt;/p&gt;

&lt;h2 id=&quot;services&quot;&gt;Services&lt;/h2&gt;

&lt;p&gt;Here, I go through a few commonly used services and list some tips for optimizing their cost.&lt;/p&gt;

&lt;h3 id=&quot;ec2&quot;&gt;EC2&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;
&lt;p&gt;&lt;strong&gt;Running hours&lt;/strong&gt; - It is possible to reduce cost by turning off instances when they are not in use. For instance, dev and test environment instances can be turned off at night, on weekends and on holidays.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
&lt;p&gt;&lt;strong&gt;Right Sizing&lt;/strong&gt; - Choosing the right instance size plays a crucial role in obtaining better performance and lowering costs.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
&lt;p&gt;&lt;strong&gt;Spot instances&lt;/strong&gt; - AWS offers its spare capacity at steep discounts as “Spot instances”. However, these instances can be reclaimed at any time, with a termination notice of only 2 minutes. They are a great way to run dev and test workloads, or even containers in production. Note that applications such as databases may not play well with these instances.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
&lt;p&gt;&lt;strong&gt;EBS Volumes&lt;/strong&gt; - AWS provides different types of volumes to cater to various use cases. If you are using io1 volumes, it can be cheaper and equally performant to switch to a larger gp2 volume. To illustrate - 
an io1 volume of 100 GiB and 1000 IOPS costs about $78 per month in the us-east-1 region, whereas a gp2 volume of 400 GiB and 1200 IOPS costs about $37 per month in the same region.&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
&lt;p&gt;&lt;strong&gt;EBS Snapshots&lt;/strong&gt; - EBS snapshots are incremental and are internally linked to the sections which have not changed (refer to &lt;a href=&quot;https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html#how_snapshots_work&quot;&gt;https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/EBSSnapshots.html#how_snapshots_work&lt;/a&gt; for more details). Over time, snapshots can become stale, since the data in the volume may have changed completely and it may no longer be useful to keep them. In such cases, deleting these snapshots reduces storage costs. More information is available here - &lt;a href=&quot;https://docs.aws.amazon.com/whitepapers/latest/cost-optimization-storage-optimization/optimizing-amazon-ebs-storage.html#delete-stale-amazon-ebs-snapshots&quot;&gt;https://docs.aws.amazon.com/whitepapers/latest/cost-optimization-storage-optimization/optimizing-amazon-ebs-storage.html#delete-stale-amazon-ebs-snapshots&lt;/a&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
&lt;p&gt;&lt;strong&gt;Reserved instances&lt;/strong&gt; - There are different types of Reserved Instances (RIs) available on AWS. Consider buying an RI if an instance meets the following criteria:&lt;/p&gt;
    &lt;ul&gt;
      &lt;li&gt;The instance runs more than 75% of the time per month.&lt;/li&gt;
      &lt;li&gt;The application run by this instance has a predictable workload and the requirements can be pre-determined.&lt;/li&gt;
      &lt;li&gt;There is no further optimization which can be done for the instance.&lt;/li&gt;
    &lt;/ul&gt;
  &lt;/li&gt;
&lt;/ul&gt;
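&lt;p&gt;The io1 vs gp2 comparison above is simple arithmetic. Here is a sketch with illustrative us-east-1 prices (io1 at $0.125 per GiB-month plus $0.065 per provisioned IOPS-month, gp2 at $0.10 per GiB-month with 3 baseline IOPS per GiB); treat the exact figures as assumptions, since AWS prices change:&lt;/p&gt;

```python
# Illustrative us-east-1 prices (assumptions - check current AWS pricing).
IO1_GIB_MONTH = 0.125   # $/GiB-month for io1 storage
IO1_IOPS_MONTH = 0.065  # $/provisioned IOPS-month for io1
GP2_GIB_MONTH = 0.10    # $/GiB-month for gp2; IOPS are included

def io1_cost(gib, iops):
    return gib * IO1_GIB_MONTH + iops * IO1_IOPS_MONTH

def gp2_cost(gib):
    return gib * GP2_GIB_MONTH

def gp2_baseline_iops(gib):
    # gp2 baseline scales with size: 3 IOPS per GiB, floor 100, cap 16000.
    return min(max(3 * gib, 100), 16000)

print(round(io1_cost(100, 1000), 2))  # 77.5
print(round(gp2_cost(400), 2))        # 40.0
print(gp2_baseline_iops(400))         # 1200
```

&lt;p&gt;With these prices, the 400 GiB gp2 volume delivers the required 1200 IOPS at roughly half the cost of the io1 volume.&lt;/p&gt;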

&lt;h3 id=&quot;s3&quot;&gt;S3&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Storage classes&lt;/strong&gt; - S3 provides various storage classes to serve different use cases. The lifetime of the object stored in S3 is an important factor for the cost. For example, if an object is being stored for 90 days, the infrequent access storage class may be cheaper than Glacier.&lt;/li&gt;
&lt;/ul&gt;
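&lt;p&gt;The cheapest class depends on both how long an object lives and how often it is read back. Below is a toy lifecycle-cost comparator with made-up per-GB prices (no claim that these match current AWS pricing, which varies by region):&lt;/p&gt;

```python
# Made-up prices per GB (assumptions - check the S3 pricing page).
CLASSES = {
    "standard":    {"store": 0.023,  "retrieve": 0.0,  "min_months": 0},
    "standard_ia": {"store": 0.0125, "retrieve": 0.01, "min_months": 1},
    "glacier":     {"store": 0.004,  "retrieve": 0.01, "min_months": 3},
}

def lifetime_cost(cls, gb, months, retrievals):
    """Total cost of storing gb for months and reading it back retrievals times."""
    c = CLASSES[cls]
    # IA and Glacier bill a minimum storage duration even for short-lived objects.
    billed_months = max(months, c["min_months"])
    return gb * (c["store"] * billed_months + c["retrieve"] * retrievals)

# 100 GB kept for 3 months and read back 10 times:
for name in CLASSES:
    print(name, round(lifetime_cost(name, 100, 3, 10), 2))
```

&lt;p&gt;With these toy numbers, frequent retrievals quickly erode the savings of the colder classes - which is why the access pattern matters as much as the storage duration.&lt;/p&gt;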

&lt;h3 id=&quot;data-transfer&quot;&gt;Data transfer&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;&lt;strong&gt;Communication between internal applications&lt;/strong&gt; - If multiple applications talk to each other within a VPC, data transfer costs can be minimized by not using Elastic IPs or public endpoints. Instead of referring to private IPs directly, you can use a private Route 53 hosted zone accessible inside the VPC.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;checklists&quot;&gt;Checklists&lt;/h2&gt;

&lt;p&gt;Checklists serve as a way to systematically optimize your resources. For example, a checklist for EC2 instances can be built and verified against each instance; whenever a criterion in the checklist is not met, an action can be taken to optimize. These checklists differ for each company. The checklists I used are given below as samples -&lt;/p&gt;

&lt;h3 id=&quot;ec2-1&quot;&gt;EC2&lt;/h3&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;1. Can I stop this instance at night and weekends?
	a. Yes - stop the instance at night and weekends. This reduces cost by 50%.
2. Can I use a linux instance instead of windows?
	a. Yes - migrate to using linux. This reduces cost by 23%.
3. Is my maximum CPU utilization &amp;lt; 40% over 30 days?
4. Is my maximum RAM utilization &amp;lt; 40% over 30 days?
	a. If both 3 and 4 are yes, downgrade the instance to 1 level lower (example: m5.xlarge to m5.large)
	b. If only 4 is yes, use a c series instance with same number of cores (since the application is compute intensive)(example: m5.xlarge to c5.xlarge)
	c. If only 3 is yes, use a r series instance with same memory (since the application is memory intensive)(example: m5.xlarge to r5.large)
5. Is my application fault tolerant and does not depend on local file storage (except writing logs. Example - all dockerized applications)?
	a. If yes, then please run spot instances. I would recommend a 50-50 split between on-demand/RI and spot instances
6. Will my instance keep running 24x7 for more than a year?
	a. Then please purchase an RI (Reserved Instance). Decide whether to buy for 1 or 3 years and also to buy No-upfront, partial upfront and full-upfront
7. Am I using the latest generation of EC2 instances?
	a. If no, then please take a downtime to upgrade the instances to the latest generation. Prepare a plan for the migration. This reduces cost by 10-15%.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;
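&lt;p&gt;Rules 3 and 4 of the checklist above can be encoded directly. A small sketch (the instance names are examples only):&lt;/p&gt;

```python
def rightsize(max_cpu_pct, max_ram_pct, threshold=40):
    """Apply checklist rules 3 and 4 to 30-day peak utilization figures."""
    cpu_low = threshold > max_cpu_pct
    ram_low = threshold > max_ram_pct
    if cpu_low and ram_low:
        # Both under-utilized: drop one instance size.
        return "downgrade one size (e.g. m5.xlarge to m5.large)"
    if ram_low:
        # Only RAM is under-utilized: the workload is compute-intensive.
        return "switch to compute-optimized (e.g. m5.xlarge to c5.xlarge)"
    if cpu_low:
        # Only CPU is under-utilized: the workload is memory-intensive.
        return "switch to memory-optimized (e.g. m5.xlarge to r5.large)"
    return "keep current size"

print(rightsize(35, 30))
print(rightsize(80, 30))
print(rightsize(30, 80))
```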

&lt;h3 id=&quot;s3-1&quot;&gt;S3&lt;/h3&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;1. Do we need the S3 bucket and its data?
  a. If no, then please delete it
2. Do we need all the data in Standard Storage?
	a. If no, then consider uploading new objects in a different storage class such as IA or Glacier. Please refer to Log retention policy to decide on the right storage class
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h3 id=&quot;rds&quot;&gt;RDS&lt;/h3&gt;

&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;1. Can the RDS instance be shut down at night and weekends?
	a. If yes, then shut down the RDS instance at night and weekends. This will reduce cost by 50%
2. Is the RDS instance Multi-AZ and running non-production environments only?
	a. If yes, then turn off Multi-AZ for non-prod environments
3. Is the maximum CPU utilization &amp;lt; 30% over the last 6 months?
	a. If yes, then try to downgrade the instance by 1 level
4. Is the storage space used &amp;lt; 20% over the last 6 months?
	a. If yes, then try to reduce the storage space allocated for the RDS instance
5. Are there databases which are not being used since last 1 year?
	a. If yes, then discuss if they are still required. If not, then remove it.
6. Are there manual database snapshots which are not required?
	a. Then delete it.
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h2 id=&quot;continuous-monitoring&quot;&gt;Continuous monitoring&lt;/h2&gt;

&lt;p&gt;Cost optimization is not a one-time exercise. Costs have to be constantly monitored and the infra reviewed regularly to avoid leaks. For example, every new instance brought up can be run through the checklist to ensure that we utilize the services provided by AWS effectively.&lt;/p&gt;

&lt;p&gt;I hope this post was useful in providing some insights on optimizing spend on AWS.&lt;/p&gt;
</description>
        <pubDate>Tue, 20 Aug 2019 00:23:00 +0000</pubDate>
        <link>https://ramkumar-kr.github.io/2019/08/20/optimizing-spend-on-aws-infrastructure.html</link>
        <guid isPermaLink="true">https://ramkumar-kr.github.io/2019/08/20/optimizing-spend-on-aws-infrastructure.html</guid>
        
        <category>DevOps</category>
        
        <category>AWS</category>
        
        <category>Cost optimization</category>
        
        
      </item>
    
      <item>
        <title>A Jugaad</title>
        <description>&lt;p&gt;I was given the task of writing a few scripts using maxscript to automate some render based tasks.&lt;/p&gt;

&lt;h2 id=&quot;maxscript&quot;&gt;Maxscript&lt;/h2&gt;

&lt;p&gt;Maxscript is the scripting language for 3DS Max. It helps you automate tasks and build UIs for them.&lt;/p&gt;

&lt;h2 id=&quot;the-challenge&quot;&gt;The challenge&lt;/h2&gt;

&lt;p&gt;One of the scripts I had to write involved a post-render script for &lt;a href=&quot;https://knowledge.autodesk.com/support/3ds-max/troubleshooting/caas/CloudHelp/cloudhelp/2017/ENU/Installation-3DSMax/files/GUID-F6732A30-821C-4547-9FAA-E46BCA13392A-htm.html&quot;&gt;Autodesk Backburner&lt;/a&gt; that saves multiple render channels to a path chosen by the user (via the standard file dialog).
The post-render script saved all render channels using &lt;a href=&quot;https://docs.chaosgroup.com/display/VRAY3MAX/Controlling+the+VFB+Programmatically&quot;&gt;&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;vfbcontrol()&lt;/code&gt;&lt;/a&gt;.
However, that method requires the path of the rendered output to be passed as an argument.
The problems were&lt;/p&gt;
&lt;ul&gt;
  &lt;li&gt;The post render script call for backburner requires the absolute path to the script file.&lt;/li&gt;
  &lt;li&gt;There can be multiple renders happening at a point in time.&lt;/li&gt;
&lt;/ul&gt;

&lt;h2 id=&quot;the-jugaad&quot;&gt;The Jugaad&lt;/h2&gt;

&lt;ul&gt;
  &lt;li&gt;Write a pre-render script which generates the post-render script with the file path (one more level and it becomes Inception).&lt;/li&gt;
  &lt;li&gt;Store the post-render script in the same directory as the render output.&lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;This jugaad was done to meet some pressing deadlines. If you think there is a better approach, I would be really thankful if you could tell me about it.&lt;/p&gt;
</description>
        <pubDate>Sun, 08 Jan 2017 00:00:00 +0000</pubDate>
        <link>https://ramkumar-kr.github.io/2017/01/08/a-jugaad/</link>
        <guid isPermaLink="true">https://ramkumar-kr.github.io/2017/01/08/a-jugaad/</guid>
        
        <category>3DS Max</category>
        
        <category>Autodesk Backburner</category>
        
        <category>Maxscript</category>
        
        
        <category>Programming</category>
        
      </item>
    
      <item>
        <title>Running Rspec in bitbucket pipelines</title>
        <description>&lt;p&gt;Recently, I integrated bitbucket pipelines into the development workflow of my organization. We use the rails framework and rspec to run our tests.&lt;/p&gt;

&lt;h2 id=&quot;bitbucket-pipelines&quot;&gt;Bitbucket pipelines&lt;/h2&gt;
&lt;p&gt;Bitbucket Pipelines is a new CI system from Bitbucket based on Docker. A YAML file, very similar to a Docker Compose file, is required to run the build.
You can learn more about Bitbucket Pipelines &lt;a href=&quot;https://bitbucket.org/product/features/pipelines&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
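&lt;p&gt;For orientation, a minimal version of such a YAML file might look like the sketch below (the image and commands are placeholders, not my actual configuration):&lt;/p&gt;

```yaml
# Minimal bitbucket-pipelines.yml sketch (placeholder image and steps).
image: rails:4.2.4          # any public Docker Hub image works here

pipelines:
  default:                  # runs for every push unless a branch section matches
    - step:
        script:
          - bundle install
          - bundle exec rspec
```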

&lt;h2 id=&quot;steps-to-start-using-bitbucket-pipelines&quot;&gt;Steps to start using bitbucket pipelines&lt;/h2&gt;

&lt;h3 id=&quot;get-an-invite&quot;&gt;Get an invite&lt;/h3&gt;
&lt;p&gt;Since Pipelines is still in beta, you need an invite to start using it. You can request an invite from Bitbucket on the &lt;a href=&quot;https://bitbucket.org/product/features/pipelines&quot;&gt;pipelines features page&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;understanding-the-structure-of-the-yaml-file&quot;&gt;Understanding the structure of the YAML file&lt;/h3&gt;
&lt;p&gt;Before diving into the specifics, you may need to understand the structure of the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bitbucket-pipelines.yml&lt;/code&gt; file. Bitbucket provides an excellent explanation, which can be accessed &lt;a href=&quot;https://confluence.atlassian.com/bitbucket/configure-bitbucket-pipelines-yml-792298910.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;docker-image&quot;&gt;Docker image&lt;/h3&gt;
&lt;p&gt;You need a Docker image to start with. I used the official rails 4.2.4 Docker image (note that this image is deprecated and you may have to use the ruby image for newer versions).
Private Docker images are also supported. The advantage of a private image is that you can install all your dependencies upfront and reduce your build time considerably. More info about using private Docker images is available &lt;a href=&quot;https://confluence.atlassian.com/bitbucket/use-docker-images-as-build-environments-in-bitbucket-pipelines-792298897.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;

&lt;h3 id=&quot;installing-dependencies&quot;&gt;Installing dependencies&lt;/h3&gt;
&lt;p&gt;My organization had a bunch of private dependencies. Since I did not use a private Docker image, I had to install them during the build. To access these private repositories, I created a new pair of RSA keys and cloned them over SSH.&lt;/p&gt;

&lt;p&gt;&lt;strong&gt;Things to note&lt;/strong&gt;&lt;/p&gt;

&lt;ul&gt;
  &lt;li&gt;
&lt;p&gt;The SSH key must not have a passphrase, so generate one without. The command I used was 
&lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;ssh-keygen -N '' -t rsa -f ~/.ssh/id_rsa&lt;/code&gt;&lt;/p&gt;
  &lt;/li&gt;
  &lt;li&gt;
    &lt;p&gt;You may need to add a list of known hosts. Example - &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;echo &quot;domain ssh-rsa &amp;lt;some key&amp;gt;&quot; &amp;gt;&amp;gt; ~/.ssh/known_hosts&lt;/code&gt;&lt;/p&gt;
  &lt;/li&gt;
&lt;/ul&gt;

&lt;p&gt;The SSH key can be configured as a secret environment variable. You can also encode the key using base64
 and decode it as a step in the build. More details can be found in this &lt;a href=&quot;https://answers.atlassian.com/questions/38853952/pulling-private-repositories-inside-pipelines&quot;&gt;question&lt;/a&gt;.&lt;/p&gt;
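&lt;p&gt;The base64 round trip itself is straightforward. A Python sketch of the idea (the key material below is a stand-in, not a real key):&lt;/p&gt;

```python
import base64

# Stand-in for a private key; a real one would come from ssh-keygen.
fake_key = "-----BEGIN RSA PRIVATE KEY-----\nMIIEow...\n-----END RSA PRIVATE KEY-----\n"

# This is what you would store in the secured pipeline environment variable.
encoded = base64.b64encode(fake_key.encode()).decode()

# This is what a build step would do before cloning: decode the variable
# and write the result to ~/.ssh/id_rsa.
decoded = base64.b64decode(encoded).decode()

assert decoded == fake_key
```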

&lt;h3 id=&quot;sending-notifications&quot;&gt;Sending Notifications&lt;/h3&gt;
&lt;p&gt;Sending notifications is not currently supported by Pipelines. Instead, you can make a curl call to a service such as Slack, which provides incoming webhooks for notifications.&lt;/p&gt;
&lt;div class=&quot;language-plaintext highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;curl -X POST -H 'Content-type: application/json' \
--data '{&quot;text&quot;:&quot;Build Successful&quot;}' \
 &quot;http://www.example.com&quot;
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;h4 id=&quot;thats-it-once-you-can-access-your-repositories-and-install-your-dependencies-you-can-just-run-bundle-and-then-run-the-test-command-since-i-use-rspec-here-i-just-run-bundle-exec-rspec&quot;&gt;&lt;em&gt;That’s it! Once you can access your repositories and install your dependencies, you can just run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bundle&lt;/code&gt; and then the test command. Since I use RSpec here, I just run &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;bundle exec rspec&lt;/code&gt;.&lt;/em&gt;&lt;/h4&gt;

</description>
        <pubDate>Sat, 03 Sep 2016 00:00:00 +0000</pubDate>
        <link>https://ramkumar-kr.github.io/2016/09/03/running-rspec-in-bitbucket-pipelines.html</link>
        <guid isPermaLink="true">https://ramkumar-kr.github.io/2016/09/03/running-rspec-in-bitbucket-pipelines.html</guid>
        
        <category>Bitbucket</category>
        
        <category>Pipelines</category>
        
        <category>RSpec</category>
        
        
      </item>
    
      <item>
        <title>Speeding up Rspec with factory_girl</title>
        <description>&lt;p&gt;I use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;rspec&lt;/code&gt; and &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;factory_girl&lt;/code&gt; in my organization for writing unit tests. However, it used to be very very slow. I took some steps to speed it up.&lt;/p&gt;

&lt;h2 id=&quot;using-build_stubbed-instead-of-build-or-create-for-creating-objects-using-factories-as-much-as-possible&quot;&gt;Using &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;build_stubbed&lt;/code&gt; instead of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;build&lt;/code&gt; or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;create&lt;/code&gt; for creating objects using factories as much as possible&lt;/h2&gt;

&lt;p&gt;Using &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;build_stubbed&lt;/code&gt; builds the object from its factory and stubs all of its associations. A lot of specs could use &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;build_stubbed&lt;/code&gt; instead of &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;create&lt;/code&gt; or &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;build&lt;/code&gt; when building objects from factories. This removed the need for a database connection and for creating associated objects, which sped up the tests by about 55-60%. For more details, refer to &lt;a href=&quot;https://robots.thoughtbot.com/use-factory-girls-build-stubbed-for-a-faster-test&quot; target=&quot;_blank&quot;&gt;https://robots.thoughtbot.com/use-factory-girls-build-stubbed-for-a-faster-test&lt;/a&gt;&lt;/p&gt;

&lt;h2 id=&quot;using-an-in-memory-database&quot;&gt;Using an in-memory database&lt;/h2&gt;

&lt;p&gt;The database was a considerable bottleneck for the performance of our tests. This is a common scenario, caused either by a slow disk, by a large number of complex queries, or both.&lt;/p&gt;

&lt;p&gt;To solve this problem, I used the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;memory_test_fix&lt;/code&gt; gem, which uses the sqlite3 adapter with an in-memory database. This sped up the tests by about 18-20%.&lt;/p&gt;

&lt;p&gt;Github link – &lt;a href=&quot;https://github.com/mvz/memory_test_fix&quot; target=&quot;_blank&quot;&gt;https://github.com/mvz/memory_test_fix&lt;/a&gt;&lt;/p&gt;

&lt;h2 id=&quot;running-multiple-tests-at-once&quot;&gt;Running multiple tests at once&lt;/h2&gt;

&lt;p&gt;Tests run in a single process, which underutilizes the processor (usually dual- or quad-core). Running tests in parallel across multiple processes resulted in a much shorter duration than running them sequentially.&lt;/p&gt;

&lt;p&gt;I used the &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;parallel_tests&lt;/code&gt; gem and ran tests with 4 processes. This reduced the duration by approximately 2.5-3 times. (Ideally this would be 4 times, but I had a lot of other applications running, so the gain was smaller.)&lt;/p&gt;

&lt;p&gt;Github Link – &lt;a href=&quot;https://github.com/grosser/parallel_tests&quot; target=&quot;_blank&quot;&gt;https://github.com/grosser/parallel_tests&lt;/a&gt;&lt;/p&gt;
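&lt;p&gt;The gap between the ideal 4x and the observed speedup can be expressed as parallel efficiency. A quick sanity check on my numbers:&lt;/p&gt;

```python
def speedup(sequential_min, parallel_min):
    """How many times faster the parallel run is."""
    return sequential_min / parallel_min

def efficiency(sequential_min, parallel_min, processes):
    """Fraction of the ideal linear speedup actually achieved."""
    return speedup(sequential_min, parallel_min) / processes

# 16 minutes sequential down to 6 minutes with 4 processes.
print(round(speedup(16, 6), 2))        # 2.67
print(round(efficiency(16, 6, 4), 2))  # 0.67
```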

&lt;h2 id=&quot;results&quot;&gt;Results&lt;/h2&gt;

&lt;p&gt;I had a test suite containing 100 unit tests. build_stubbed was used for only 4 tests (I should have converted at least 40 before measuring). Making the improvements one by one on the same suite reduced the overall execution time from 20 minutes to 6 minutes. Here’s a before-after table of execution times (rounded to the minute).&lt;/p&gt;

&lt;table&gt;
  &lt;thead&gt;
    &lt;tr&gt;
      &lt;th&gt;Improvement Done&lt;/th&gt;
      &lt;th&gt;Before&lt;/th&gt;
      &lt;th&gt;After&lt;/th&gt;
    &lt;/tr&gt;
  &lt;/thead&gt;
  &lt;tbody&gt;
    &lt;tr&gt;
      &lt;td&gt;Using &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;build_stubbed&lt;/code&gt; for 4 out of 100 tests&lt;/td&gt;
      &lt;td&gt;20 minutes&lt;/td&gt;
      &lt;td&gt;19 minutes&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Using an in-memory database&lt;/td&gt;
      &lt;td&gt;19 minutes&lt;/td&gt;
      &lt;td&gt;16 minutes&lt;/td&gt;
    &lt;/tr&gt;
    &lt;tr&gt;
      &lt;td&gt;Using &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;parallel_tests&lt;/code&gt; gem with 4 processes&lt;/td&gt;
      &lt;td&gt;16 minutes&lt;/td&gt;
      &lt;td&gt;6 minutes&lt;/td&gt;
    &lt;/tr&gt;
  &lt;/tbody&gt;
&lt;/table&gt;
</description>
        <pubDate>Sat, 25 Jun 2016 17:48:02 +0000</pubDate>
        <link>https://ramkumar-kr.github.io/2016/06/25/speeding-up-rspec-with-factory_girl/</link>
        <guid isPermaLink="true">https://ramkumar-kr.github.io/2016/06/25/speeding-up-rspec-with-factory_girl/</guid>
        
        <category>Ruby</category>
        
        <category>RSpec</category>
        
        <category>Factory girl</category>
        
        
        <category>Programming</category>
        
      </item>
    
      <item>
        <title>Extension development in firefox and chrome</title>
        <description>&lt;p&gt;I developed an extension in both chrome and firefox which provides a new tab page resembling a speed dial. I’ll just highlight the advantages and disadvantages of development in both the browsers.&lt;/p&gt;

&lt;hr /&gt;

&lt;h2 id=&quot;chrome&quot;&gt;Chrome&lt;/h2&gt;

&lt;h4 id=&quot;advantages&quot;&gt;Advantages&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;Chrome has very good &lt;a href=&quot;https://developer.chrome.com/extensions/api_index&quot;&gt;documentation&lt;/a&gt; and &lt;a href=&quot;https://developer.chrome.com/extensions/getstarted&quot;&gt;tutorials&lt;/a&gt;, with step-by-step explanations for a few &lt;a href=&quot;https://developer.chrome.com/extensions/samples&quot;&gt;examples&lt;/a&gt; as well.&lt;/li&gt;
  &lt;li&gt;Since the APIs are based on javascript, no additional learning was required to get started.&lt;/li&gt;
  &lt;li&gt;Testing or Trying out the extension in chrome on Mac or Linux is pretty simple as well.
    &lt;ol&gt;
      &lt;li&gt;Go to chrome://extensions and enable Developer mode&lt;/li&gt;
      &lt;li&gt;Click on “Load unpacked extension” and select the folder containing the extension code&lt;/li&gt;
      &lt;li&gt;Your extension is now loaded to the browser&lt;/li&gt;
    &lt;/ol&gt;
  &lt;/li&gt;
  &lt;li&gt;Google Analytics works very well with the extension&lt;/li&gt;
  &lt;li&gt;Publishing updates to your extension is hassle free and all your users are updated to the latest version within a few minutes&lt;/li&gt;
  &lt;li&gt;Ability to have the extensions paid and can be a source of income.&lt;/li&gt;
  &lt;li&gt;An extension written for chrome will work in chromium and is most likely to work in opera , vivaldi and other chromium based browsers as well.&lt;/li&gt;
&lt;/ul&gt;

&lt;h4 id=&quot;disadvantages&quot;&gt;Disadvantages&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;A one-time fee of &lt;strong&gt;$5.00&lt;/strong&gt; must be paid to Google to publish your first app or extension on the Chrome Web Store.&lt;/li&gt;
  &lt;li&gt;Trying out an extension in developer mode on Windows with a stable version of Chrome can be very annoying, since Chrome asks you to disable the extension every time the browser is opened.&lt;/li&gt;
  &lt;li&gt;Stable versions of Chrome on Windows disable all extensions that are not from the Chrome Web Store. These extensions cannot be re-enabled either manually or via an API. You can read more about it &lt;a href=&quot;http://www.howtogeek.com/191364/how-do-you-re-enable-non-web-store-extensions-in-the-stable-and-beta-channels-of-chrome/&quot;&gt;here&lt;/a&gt;.&lt;/li&gt;
  &lt;li&gt;There is a cap of 20 apps/extensions/themes (combined) that can be published to the Chrome Web Store. To publish more, you need approval from Google.&lt;/li&gt;
&lt;/ul&gt;


&lt;hr /&gt;

&lt;h2 id=&quot;firefox&quot;&gt;Firefox&lt;/h2&gt;

&lt;h4 id=&quot;advantages-1&quot;&gt;Advantages&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;No payment is required to publish an add-on&lt;/li&gt;
  &lt;li&gt;It is very easy to port a Chrome extension to a Firefox add-on; only the manifest file format needs to change&lt;/li&gt;
  &lt;li&gt;Testing, building, and trying out the extension is possible on any operating system&lt;/li&gt;
  &lt;li&gt;Extensive libraries offer customization options that can change the entire look and feel of the browser.&lt;/li&gt;
  &lt;li&gt;No cap on the number of extensions you can release to the add-on store&lt;/li&gt;
  &lt;li&gt;Extensions can be distributed outside the add-on store&lt;/li&gt;
&lt;/ul&gt;
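
&lt;p&gt;As a rough sketch of how small the manifest difference can be: below is a minimal WebExtensions-style &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;manifest.json&lt;/code&gt; for a new tab override. Chrome accepts a file like this as-is, while Firefox additionally expects an &lt;code class=&quot;language-plaintext highlighter-rouge&quot;&gt;applications&lt;/code&gt; block carrying a Gecko extension ID (the ID below is made up for illustration):&lt;/p&gt;

&lt;div class=&quot;language-json highlighter-rouge&quot;&gt;&lt;div class=&quot;highlight&quot;&gt;&lt;pre class=&quot;highlight&quot;&gt;&lt;code&gt;{
  &quot;manifest_version&quot;: 2,
  &quot;name&quot;: &quot;Yet Another New Tab Page&quot;,
  &quot;version&quot;: &quot;1.0&quot;,
  &quot;chrome_url_overrides&quot;: { &quot;newtab&quot;: &quot;newtab.html&quot; },
  &quot;applications&quot;: {
    &quot;gecko&quot;: { &quot;id&quot;: &quot;yet-another-newtab@example.com&quot; }
  }
}
&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;&lt;/div&gt;

&lt;p&gt;Everything else (the HTML, scripts, and styles of the new tab page itself) can stay identical across both browsers.&lt;/p&gt;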

&lt;h4 id=&quot;disadvantages-1&quot;&gt;Disadvantages&lt;/h4&gt;

&lt;ul&gt;
  &lt;li&gt;Each review takes about a week to complete&lt;/li&gt;
  &lt;li&gt;Multiple libraries can be used to implement a single piece of functionality. For example, the new tab page can be overridden either through XUL or by using the new tab override API provided by Firefox&lt;/li&gt;
  &lt;li&gt;Poor or no documentation for the libraries&lt;/li&gt;
  &lt;li&gt;Loading external scripts from the extension is not allowed. This requires you to bundle all library scripts (such as jQuery) as part of your add-on, thereby increasing its size&lt;/li&gt;
  &lt;li&gt;Dwindling user base. The number of Firefox users has been declining over the past few months, which makes me question whether the effort required to write an extension is worthwhile&lt;/li&gt;
  &lt;li&gt;No pricing. You can only ask users to donate toward the development and maintenance of the add-on, which has discouraged many companies from creating add-ons for Firefox&lt;/li&gt;
&lt;/ul&gt;

&lt;hr /&gt;

&lt;h3 id=&quot;footnotes&quot;&gt;Footnotes&lt;/h3&gt;

&lt;ul&gt;
  &lt;li&gt;All the points classified as advantages or disadvantages are purely my opinion&lt;/li&gt;
  &lt;li&gt;If you want to try out my extension, you can find the links below&lt;/li&gt;
  &lt;li&gt;Chrome – &lt;a href=&quot;https://chrome.google.com/webstore/detail/new-tab/dbnbjnjckidjkjdocfflalcgmlhkcfee&quot;&gt;https://chrome.google.com/webstore/detail/new-tab/dbnbjnjckidjkjdocfflalcgmlhkcfee&lt;/a&gt;&lt;/li&gt;
  &lt;li&gt;Firefox – &lt;a href=&quot;https://addons.mozilla.org/en-US/firefox/addon/yet-another-new-tab-page/&quot;&gt;https://addons.mozilla.org/en-US/firefox/addon/yet-another-new-tab-page/?src=search&lt;/a&gt;&lt;/li&gt;
&lt;/ul&gt;

</description>
        <pubDate>Sat, 09 Apr 2016 11:17:31 +0000</pubDate>
        <link>https://ramkumar-kr.github.io/2016/04/09/extension-development-in-firefox-and-chrome/</link>
        <guid isPermaLink="true">https://ramkumar-kr.github.io/2016/04/09/extension-development-in-firefox-and-chrome/</guid>
        
        <category>Chrome</category>
        
        <category>Firefox</category>
        
        <category>Webextensions</category>
        
        
      </item>
    
  </channel>
</rss>
