<?xml version="1.0" encoding="UTF-8"?><rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:atom="http://www.w3.org/2005/Atom" version="2.0" xmlns:cc="http://cyber.law.harvard.edu/rss/creativeCommonsRssModule.html">
    <channel>
        <title><![CDATA[Sourcerer Blog - Medium]]></title>
        <description><![CDATA[All the work that a SWE does is largely forgotten after said feature, product, or fix has been released. We are a small group of software engineers who believe that this should not be the case. We believe an engineer&#39;s work can tell a story and so created https://sourcerer.io. - Medium]]></description>
        <link>https://blog.sourcerer.io?source=rss----b33180f5facf---4</link>
        <image>
            <url>https://cdn-images-1.medium.com/proxy/1*TGH72Nnw24QL3iV9IOm4VA.png</url>
            <title>Sourcerer Blog - Medium</title>
            <link>https://blog.sourcerer.io?source=rss----b33180f5facf---4</link>
        </image>
        <generator>Medium</generator>
        <lastBuildDate>Mon, 06 Apr 2026 00:53:05 GMT</lastBuildDate>
        <atom:link href="https://blog.sourcerer.io/feed" rel="self" type="application/rss+xml"/>
        <webMaster><![CDATA[yourfriends@medium.com]]></webMaster>
        <atom:link href="http://medium.superfeedr.com" rel="hub"/>
        <item>
            <title><![CDATA[Python Internals: An Introduction]]></title>
            <link>https://blog.sourcerer.io/python-internals-an-introduction-d14f9f70e583?source=rss----b33180f5facf---4</link>
            <guid isPermaLink="false">https://medium.com/p/d14f9f70e583</guid>
            <category><![CDATA[computer-science]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[python-interpreter]]></category>
            <category><![CDATA[cpython]]></category>
            <category><![CDATA[python]]></category>
            <dc:creator><![CDATA[Michael Prantl]]></dc:creator>
            <pubDate>Fri, 07 Aug 2020 19:16:16 GMT</pubDate>
            <atom:updated>2020-10-01T09:29:30.838Z</atom:updated>
            <content:encoded><![CDATA[<h4>“Is Python compiled or interpreted? Both.”</h4><h4>A Lovely Stroll From Launching CPython to Code Execution</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*PQV6HS5D90zjtD0Y" /><figcaption>Photo by <a href="https://unsplash.com/@naszymokiem?utm_source=medium&amp;utm_medium=referral">Łukasz Maźnica</a> on <a href="https://unsplash.com?utm_source=medium&amp;utm_medium=referral">Unsplash</a></figcaption></figure><p><strong><em>Disclaimer:</em></strong><em> This article may contain more C code than Python code.</em></p><p>Python is fascinating and probably the closest that humanity has come so far to executable pseudocode. It attracts plenty of people who have never coded before to come, try, and possibly discover entirely new talents — like me. A couple of years down the line, I have moved on to core topics of Computer Science, spanning the entire curriculum of a Computer Science degree. I have not forgotten where I came from, and I look back fondly at the time when I admittedly bent my mind over the simplest concepts, like loops, functions and classes. Yet the idea that I would need to run a separate program to execute my code was long beyond me — since my childhood, all it had taken to run any program on my computer was to double-click a single .exe file on screen.</p><p>In this article and the next, I would like to explore the inner life of the CPython interpreter, foremost its runtime environment. It is not difficult to find material on other internal features of the Python interpreter — like the compilation process or the interpretation of bytecode. My overview focuses on the machine aspect of Python’s inner virtual machine. A deep dive into the source is going to be inevitable to see and understand what is actually going on. Let us first naively stroll down the call stack and marvel at how the runtime evolves around us. 
We will come to the more intriguing questions eventually.</p><p>In the end, I would like to find the answers to the following questions: What does the machine in the Python virtual machine look like? How does it manage processes and threads? What is the memory layout inside the virtual machine?</p><p><em>From here on, Python refers to CPython version 3.9, the latest as of the time of writing. I used Windows 10 64-bit and Visual Studio 2019 to build and analyse the source. The things discussed here will likely not hold true for other implementations of Python, like </em><a href="https://www.jython.org/"><em>Jython</em></a><em> or </em><a href="https://ironpython.net/"><em>IronPython</em></a><em>. I expect the differences in using another operating system to be rather obvious whenever they occur.</em></p><p><em>I have edited and annotated any source code shown here for brevity, clarity and readability. I highly recommend reading </em><a href="https://github.com/python/cpython/tree/3.9"><em>the CPython source</em></a><em> alongside for the full picture.</em></p><h3><strong>The High-Level Overview</strong></h3><p>Is Python a compiled or interpreted language? Both. Python is compiled into bytecode, which is then interpreted by a <em>Virtual Machine</em>. When we feed the interpreter Python source code, we can conceptually imagine two steps taking place:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*0qujc2dO_5rSOVi3hOXIKA.jpeg" /><figcaption>Fig. 1 Python process steps from launching the interpreter to code execution.</figcaption></figure><p>This is of course an overly simplified model. 
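</p><p>The two conceptual steps can be reproduced from within Python itself — a minimal sketch using the built-in compile and exec functions, which expose exactly this split:</p>

```python
# Step 1: compile source code into a code object containing bytecode.
source = "result = 6 * 7"
code = compile(source, "<demo>", "exec")

# Step 2: hand the code object to the virtual machine for evaluation.
namespace = {}
exec(code, namespace)
print(namespace["result"])  # 42
```

<p>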
We will later have a brief look at the components of the Python compiler, but our focus will be the anatomy of the interpreter and its runtime.</p><h4>The Project Layout</h4><p>To gain a better orientation before diving into the source code, it might help to make oneself familiar with how the source directory of the CPython project is organised:</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*6jneXKcpQEASqRDTbkckxw.jpeg" /><figcaption>Fig. 2 The source tree of CPython tentatively grouped for oversight.</figcaption></figure><p>The interpreter is implemented as a shared library in the three subdirectories <a href="https://github.com/python/cpython/tree/3.9/Objects">Objects/</a>, <a href="https://github.com/python/cpython/tree/3.9/Include">Include/</a>, and <a href="https://github.com/python/cpython/tree/3.9/Python">Python/</a>. The implementation of Python’s standard library lives in a separate folder, but the C extension modules — located in the <a href="https://github.com/python/cpython/tree/3.9/Modules">Modules/</a> directory — also use the Python headers, effectively loading parts of the interpreter as a library. The same is true for third-party libraries like numpy which are implemented via the <a href="https://docs.python.org/3/c-api/index.html">Python/C API</a>.</p><h3>Traversing Down `main`</h3><p>The <a href="https://github.com/python/cpython/blob/fe928b32daca184e16ccc0ebdc20314cfa776b98/Programs/python.c#L7">main</a> function shown below is located in <a href="https://github.com/python/cpython/blob/3.9/Programs/python.c">Programs/python.c</a>. 
But the actual entry point to the interpreter is <a href="https://github.com/python/cpython/blob/fe928b32daca184e16ccc0ebdc20314cfa776b98/Modules/main.c#L708">Py_Main</a>, located in <a href="https://github.com/python/cpython/blob/3.9/Modules/main.c">Modules/main.c</a>.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/8973f32e6f2e520e01de97901eeda56c/href">https://medium.com/media/8973f32e6f2e520e01de97901eeda56c/href</a></iframe><p>Even if you have not programmed natively before, you are probably familiar with a <a href="https://en.wikipedia.org/wiki/Entry_point">main</a> function. But what is <a href="https://docs.microsoft.com/en-us/cpp/c-language/using-wmain?view=vs-2019">wmain</a>? Well, Windows supports both 8-bit ANSI character types and UTF-16, the native character type on Windows. In addition to the standard C character strings, the Windows API provides variants of all its functions that also accept native Windows wide character strings, i.e. UTF-16.</p><p>Three levels down the call stack, we pass by <a href="https://github.com/python/cpython/blob/fe928b32daca184e16ccc0ebdc20314cfa776b98/Modules/main.c#L692">pymain_main</a>. So far, we have fiddled a bit with command line arguments. Next, a couple of initialization routines run, assembling a configuration object from command line arguments and environment variables. Two levels further, inside <a href="https://github.com/python/cpython/blob/fe928b32daca184e16ccc0ebdc20314cfa776b98/Modules/main.c#L539">pymain_run_python</a>, we are reaching a crossing point:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/18b59edaced2cd8c1d75d4b1cc84e583/href">https://medium.com/media/18b59edaced2cd8c1d75d4b1cc84e583/href</a></iframe><p>Depending on how it was invoked, the interpreter has to decide in which mode to run. 
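</p><p>These invocation modes are easy to exercise yourself — a quick sketch driving two of them through subprocess so the output is reproducible (the file and interactive modes are indicated in comments only):</p>

```python
import subprocess
import sys

# Each command line below lands in a different pymain_run_* branch
# of Modules/main.c when the interpreter starts up.
runs = {
    "command (-c)": [sys.executable, "-c", "print('hello')"],
    "module (-m)": [sys.executable, "-m", "platform"],
    # [sys.executable, "script.py"]  -> pymain_run_file
    # [sys.executable] with no args  -> interactive REPL (pymain_repl)
}
for mode, argv in runs.items():
    out = subprocess.run(argv, capture_output=True, text=True)
    print(mode, "->", out.stdout.strip())
```

<p>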
An interesting note on the side: the Python interpreter has a source code representation of itself in the form of the <a href="https://github.com/python/cpython/blob/fe928b32daca184e16ccc0ebdc20314cfa776b98/Include/internal/pycore_interp.h#L71">PyInterpreterState</a> struct. The command line flags and arguments provided determine how to continue from here:</p><ul><li>The -c flag invokes the <a href="https://github.com/python/cpython/blob/a7dc71470156680f1fd5243290c6d377824b7ef4/Modules/main.c#L226">pymain_run_command</a> branch</li><li>The -m flag takes the <a href="https://github.com/python/cpython/blob/a7dc71470156680f1fd5243290c6d377824b7ef4/Modules/main.c#L257">pymain_run_module</a> branch</li><li>If instead a file name is provided, <a href="https://github.com/python/cpython/blob/a7dc71470156680f1fd5243290c6d377824b7ef4/Modules/main.c#L304">pymain_run_file</a> is called</li><li>Else, read anything that has potentially been piped in via &lt;stdin&gt; and enter the interactive mode (<a href="https://github.com/python/cpython/blob/a7dc71470156680f1fd5243290c6d377824b7ef4/Modules/main.c#L483">pymain_run_stdin</a> and <a href="https://github.com/python/cpython/blob/a7dc71470156680f1fd5243290c6d377824b7ef4/Modules/main.c#L539">pymain_repl</a>)</li></ul><h4>The Bit Between Writing Code and Running Code: The Compiler</h4><p>Regardless of which branch is taken, at some point the interpreter will have to take in Python source code and compile it. The Python compilation process involves four separate transformations.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*IGN_USP1pFA42VIbDv4UDw.jpeg" /><figcaption>Fig. 3 Python compiler transformations from source to bytecode.</figcaption></figure><p>The raw source is first received by the parser for tokenization. The tokens are arranged as nodes in a <em>Parse Tree</em>, representing the lexical structure of the code. 
The Parse Tree is then transformed into an <em>Abstract Syntax Tree</em> (AST), where the tokens are grouped and interpreted as syntactic elements. The “grammar” of the Python language determines whether a stream of lexical tokens represents syntactically correct Python code. Third, the AST is transformed into a <em>Control Flow Graph</em> (CFG). The CFG still has a tree structure, so the compiler must first flatten the graph before it can generate <em>Bytecode</em>. Finally, the compiler emits its output in the form of <em>code objects</em>, which contain the generated bytecode bundled with extra information necessary for the execution of this code unit.</p><p>Code objects are full-fledged Python objects, as suggested by a) the object’s name and b) the PyObject_HEAD field in the <a href="https://github.com/python/cpython/blob/b8f704d2190125a7750b50cd9b67267b9c20fd43/Include/cpython/code.h#L18">PyCodeObject</a> struct. That also means that they are fully inspectable at runtime — for example like so:</p><pre>&gt;&gt;&gt; def foo(x, y):<br>...     return x + y<br>...<br>&gt;&gt;&gt; foo.__code__<br>&lt;code object foo at 0x00...0F50, file &quot;&lt;stdin&gt;&quot;, line 1&gt;</pre><pre>&gt;&gt;&gt; foo.__code__.co_code<br>b&#39;|\x00|\x01\x17\x00S\x00&#39;</pre><p>We can see that the bytecode representation is very compact. In case one prefers a more human-readable representation, one can use the <a href="https://docs.python.org/3.9/library/dis.html">dis module</a>, which is part of Python’s standard library:</p><pre>&gt;&gt;&gt; from dis import dis<br>&gt;&gt;&gt; dis(foo.__code__)<br>  1           0 LOAD_FAST                0 (x)<br>              2 LOAD_FAST                1 (y)<br>              4 BINARY_ADD<br>              6 RETURN_VALUE</pre><p>This is the disassembled version of the bytecode emitted by the compiler for our foo function. Does the Python compiler optimize code? Yes, it does — but to a very limited degree. 
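</p><p>One of these optimizations, constant folding, is easy to observe directly — a small sketch; the folded result shows up among the code object’s constants:</p>

```python
import dis

# The expression 2 * 3 is folded at compile time: the emitted bytecode
# loads the ready-made constant 6 instead of computing the product.
code = compile("x = 2 * 3", "<demo>", "exec")
print(6 in code.co_consts)  # True: the folded result is a constant
dis.dis(code)
```

<p>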
For example, it eliminates dead code and folds simple constant expressions. If you are interested in the kind of optimizations that Python applies, have a look at <a href="https://github.com/python/cpython/blob/b3fbff7289176ba1a322e6899c3d4a04880ed5a7/Python/peephole.c#L229">PyCode_Optimize</a> in <a href="https://github.com/python/cpython/blob/b3fbff7289176ba1a322e6899c3d4a04880ed5a7/Python/peephole.c">Python/peephole.c</a>. Before continuing our traversal, I would like to point out one oddly named field in <a href="https://github.com/python/cpython/blob/fe928b32daca184e16ccc0ebdc20314cfa776b98/Include/cpython/code.h#L18">PyCodeObject</a>, called <a href="https://github.com/python/cpython/blob/b8f704d2190125a7750b50cd9b67267b9c20fd43/Include/cpython/code.h#L43">void *co_zombieframe</a>. It is an artifact of Python’s memory management strategy, and it will reappear in a more sensible context later, when we talk about memory management.</p><p>This is as much as we are going to discuss the compiler. Interested readers can find more detailed information in the <em>Python Developer Guide</em> (see [5]) and, although it is a bit dated by now, I can also highly recommend Eli Bendersky’s blog (see [2]).</p><h4>The Final Mile: Code Objects and Code Evaluation</h4><p>Back to where we were: Retrieving our source code is a bit more involved when running in interactive mode or when running a module, because Python has more work to do gathering all the necessary files or prompting input from the user. In fact, when you are running a module, the interpreter dispatches most of the responsibility to the <a href="https://docs.python.org/3.9/library/runpy.html">runpy</a> module — also part of Python’s standard library. But regardless of the interpreter mode, paths converge again when the code object is handed to <a href="https://github.com/python/cpython/blob/564cd187677ae8d1488c4d8ae649aea34ebbde07/Python/pythonrun.c#L1101">run_eval_code_obj</a>. 
From there it is passed down some unspectacular functions until it hits <a href="https://github.com/python/cpython/blob/564cd187677ae8d1488c4d8ae649aea34ebbde07/Python/ceval.c#L4098">_PyEval_EvalCode</a>. Meanwhile, we have arrived in <a href="https://github.com/python/cpython/blob/3.9/Python/ceval.c">Python/ceval.c</a>.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/3b67663fb691618ab75f437f2f69b71c/href">https://medium.com/media/3b67663fb691618ab75f437f2f69b71c/href</a></iframe><p>Here are three objects that are tightly connected to each other: the already familiar <a href="https://github.com/python/cpython/blob/b8f704d2190125a7750b50cd9b67267b9c20fd43/Include/cpython/code.h#L18">PyCodeObject</a>, <a href="https://github.com/python/cpython/blob/cb9879b948a19c9434316f8ab6aba9c4601a8173/Include/cpython/frameobject.h#L28">PyFrameObject</a>, and <a href="https://github.com/python/cpython/blob/b4d5a5cca29426a282e8f1e64b2271fdd1f0a23e/Include/cpython/pystate.h#L47">PyThreadState</a>. The original function, including the parts that I have omitted, is a bit of a mouthful. Most of it serves the initialization of the frame object. A frame object can be understood as a runtime representation of a code object, much as a process is a runtime representation of a program.</p><p>If you already have some familiarity with Python’s internal architecture, you may know what comes next: the core interpreter loop with an infamously large switch statement.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/073db7c65338877065594b918db0af11/href">https://medium.com/media/073db7c65338877065594b918db0af11/href</a></iframe><p>Huh, odd... This is not the 2000+ line core evaluation loop that we expected to see. 
The evaluation function for this frame is dynamically invoked through a function pointer that had been stored inside the <a href="https://github.com/python/cpython/blob/fe928b32daca184e16ccc0ebdc20314cfa776b98/Include/internal/pycore_interp.h#L71">PyInterpreterState</a> instance at some point earlier during initialization. The function pointer gives no hint of where precisely it dispatches, but its name offers a clue. To understand what happens beyond this point, let us step back again and look at the initialization process.</p><h3>The Runtime</h3><p>Going back to where we came from, remember that we passed the first initialization routine in <a href="https://github.com/python/cpython/blob/fe928b32daca184e16ccc0ebdc20314cfa776b98/Modules/main.c#L692">pymain_main</a> — the third level in the call stack. We briefly mentioned that initialization took place, but stepped over the call to <a href="https://github.com/python/cpython/blob/fe928b32daca184e16ccc0ebdc20314cfa776b98/Modules/main.c#L34">pymain_init</a>.</p><h4>Runtime State</h4><p>The initialization of Python takes place in three distinct steps. The first is the initialization of the Python runtime. The <a href="https://github.com/python/cpython/blob/fe928b32daca184e16ccc0ebdc20314cfa776b98/Python/pylifecycle.c#L66">_PyRuntime</a> is statically initialized in <a href="https://github.com/python/cpython/blob/3.9/Python/pylifecycle.c">Python/pylifecycle.c</a>. It is a <a href="https://github.com/python/cpython/blob/fe928b32daca184e16ccc0ebdc20314cfa776b98/Include/internal/pycore_runtime.h#L52">_PyRuntimeState</a> struct, which itself is defined in <a href="https://github.com/python/cpython/blob/3.9/Include/internal/pycore_runtime.h#L52">Include/internal/pycore_runtime.h</a>. It monitors a number of behind-the-scenes states not directly exposed to userspace.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*y83UlP15pKuRoPxBgrNMPg.jpeg" /><figcaption>Fig. 
4 The runtime state structure.</figcaption></figure><p>Figure 4 shows the three fields of the runtime state that are going to be of most interest going forward. The first field is a linked list of <a href="https://github.com/python/cpython/blob/fe928b32daca184e16ccc0ebdc20314cfa776b98/Include/internal/pycore_runtime.h#L72"><em>interpreter states</em></a><em>. </em>The next two fields point to two subsystems whose naming is a bit less obvious:<em> </em><a href="https://github.com/python/cpython/blob/fe928b32daca184e16ccc0ebdc20314cfa776b98/Include/internal/pycore_runtime.h#L16">_ceval_runtime_state</a> and <a href="https://github.com/python/cpython/blob/fe928b32daca184e16ccc0ebdc20314cfa776b98/Include/internal/pycore_runtime.h#L27">_gilstate_runtime_state</a>.</p><p>The ceval state is the first subsystem that gets initialized by <a href="https://github.com/python/cpython/blob/fe928b32daca184e16ccc0ebdc20314cfa776b98/Python/pystate.c#L48">_PyRuntimeState_Init_impl</a> in <a href="https://github.com/python/cpython/blob/3.9/Python/pystate.c">Python/pystate.c</a>. The ceval state is a proxy for <a href="https://github.com/python/cpython/blob/3.9/Python/ceval.c">Python/ceval.c</a>. Its responsibility is to manage and ensure safe access to the frame evaluator — <a href="https://github.com/python/cpython/blob/fe928b32daca184e16ccc0ebdc20314cfa776b98/Python/ceval.c#L890">_PyEval_EvalFrameDefault</a>, which we have not looked at yet. Oddly enough, the evaluator is not bound to the ceval state but — as we have seen before — is referenced and called through the running interpreter state instance. The ceval state still fulfills a crucial role in the evaluation of frame objects: it keeps track of who is currently in possession of the <em>Global Interpreter Lock</em> (GIL) and who is therefore allowed to enter the evaluation loop. What precisely is the GIL? 
This is a heavily debated relic of the time when multi-threading still meant multiple threads executing on one core, not many. The role of the GIL and what it does will become clearer in the next article, when we talk about parallelism and data coherence.</p><p>After setting the locale and some environment variables, the default allocator for the interpreter is configured. There are four different allocators available, and each comes in three domain-specific flavours: <a href="https://github.com/python/cpython/blob/b4d5a5cca29426a282e8f1e64b2271fdd1f0a23e/Include/cpython/pymem.h#L27">PYMEM_DOMAIN_RAW</a>, <a href="https://github.com/python/cpython/blob/b4d5a5cca29426a282e8f1e64b2271fdd1f0a23e/Include/cpython/pymem.h#L30">PYMEM_DOMAIN_MEM</a> and <a href="https://github.com/python/cpython/blob/b4d5a5cca29426a282e8f1e64b2271fdd1f0a23e/Include/cpython/pymem.h#L33">PYMEM_DOMAIN_OBJ</a>. The last one is obviously the domain for allocating Python objects, while the first two pass allocation requests through to the system allocator and differ only in whether they enforce thread safety. The available allocators are <em>default</em>, <em>debug</em>, <em>pymalloc </em>— incidentally the default allocator — and <em>malloc</em>.</p><h4>Interpreter, Garbage Collector and the Main Thread</h4><p>The last bit of the initialization takes place in two steps: first the core initialization and then the main initialization. <a href="https://github.com/python/cpython/blob/fe928b32daca184e16ccc0ebdc20314cfa776b98/Python/pylifecycle.c#L903">pyinit_core</a> in <a href="https://github.com/python/cpython/blob/3.9/Python/pylifecycle.c">Python/pylifecycle.c</a> creates the first interpreter instance and with it the first thread. 
In <a href="https://github.com/python/cpython/blob/fe928b32daca184e16ccc0ebdc20314cfa776b98/Python/pystate.c#L197">PyInterpreterState_New</a> we find what we were looking for:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/85d0dc4469b735e4581db30eb8123e18/href">https://medium.com/media/85d0dc4469b735e4581db30eb8123e18/href</a></iframe><p>The interpreter receives its frame evaluator: a function pointer to <a href="https://github.com/python/cpython/blob/fe928b32daca184e16ccc0ebdc20314cfa776b98/Python/ceval.c#L890">_PyEval_EvalFrameDefault</a>. The new interpreter is then added to the list of runtime interpreters that we have seen in figure 4. We can spot one more important submodule of the interpreter being initialized: the <em>Garbage Collector</em>.</p><p>The initialization of the garbage collector is done by <a href="https://github.com/python/cpython/blob/fe928b32daca184e16ccc0ebdc20314cfa776b98/Modules/gcmodule.c#L132">_PyGC_InitState</a> in <a href="https://github.com/python/cpython/blob/3.9/Modules/gcmodule.c">Modules/gcmodule.c</a>. Without knowing the precise inner workings yet, we see the garbage collector receiving an array of three GC generations. Each generation in turn has a head pointer to a linked list of objects that are tracked by the garbage collector. The garbage collector itself maintains a head pointer to the generation zero list of objects.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*UtlffvvTrUTX0k61FT8pFg.jpeg" /><figcaption>Fig. 5 The garbage collector runtime state.</figcaption></figure><p>The <em>count </em>is the number of live objects currently tracked in each generation and <em>threshold </em>is the number of tracked objects per generation that will trigger a collection attempt by the garbage collector. The thresholds are initialized to 700, 10, 10 for the first, second and third generation respectively. 
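</p><p>These numbers are exposed at the Python level through the standard gc module — a quick sketch (the defaults shown can also be changed at runtime with gc.set_threshold):</p>

```python
import gc

# Default collection thresholds for generations 0, 1 and 2.
print(gc.get_threshold())  # (700, 10, 10)

# The per-generation object counts the collector is currently tracking.
print(gc.get_count())
```

<p>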
You may think that the thresholds appear low enough that they should be frequently hit even by average applications. After all, every integer and every float is a PyObject with all the associated baggage. But the Python garbage collector in fact does not track every type of Python object, only those at risk of creating reference cycles — primarily mutable containers. Reference cycles prevent the reference count of an object from ever reaching zero, and therefore the object from being deallocated — in other words, they leak memory. The garbage collector’s role in Python is to prevent this from happening.</p><p>Next, a new <a href="https://github.com/python/cpython/blob/b4d5a5cca29426a282e8f1e64b2271fdd1f0a23e/Include/cpython/pystate.h#L47">PyThreadState</a> is created. It receives an accessor method to retrieve its own current frame and then takes the Global Interpreter Lock.</p><p>After the runtime core has been initialized, the first interpreter and its first thread are up and running. The main initialization, which finalizes the initialization process, lastly sets up all the builtin modules, like <a href="https://docs.python.org/3.9/library/sys.html">sys</a> and <a href="https://docs.python.org/3.9/library/__main__.html">__main__</a>, and other features exposed to the Python programmer.</p><p>With the frame evaluator we have all the building blocks together to enter the evaluation loop: an <em>interpreter </em>and a <em>thread </em>providing the necessary context for the evaluation, a <em>frame object</em> to evaluate and the associated <em>code object</em> with the list of opcodes. 
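</p><p>Frame objects are observable from Python code as well — a small sketch walking the live frame stack via the standard inspect module (the function names are made up for the example):</p>

```python
import inspect

def outer():
    return inner()

def inner():
    # Each active call is backed by a frame object; f_back links the
    # frames into a call stack, and f_code points at the code object
    # the frame is executing.
    frame = inspect.currentframe()
    names = []
    while frame is not None:
        names.append(frame.f_code.co_name)
        frame = frame.f_back
    return names

print(outer())  # e.g. ['inner', 'outer', '<module>'] at module level
```

<p>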
Before we proceed, I highly recommend taking a look at <a href="https://github.com/python/cpython/blob/da7d1f04086598a29f77bd452beefe847d038344/Python/ceval.c#L1390">the main evaluation loop</a> — it is well documented, it is massive, it is beautiful, and it is most of all outstandingly intuitive to read, which is an achievement in itself.</p><h3>Conclusion</h3><p>The attentive reader will have noticed that I have not in fact answered the initially posed questions to a satisfying degree. But for the sake of my own sanity and the attention span of everyone else, we are going to spend time on each question in a dedicated article, looking at the Python runtime from various angles.</p><p>In this article we have approached the CPython source code from a naive, but hopefully intuitive perspective: we started from the entry point of the interpreter, gradually proceeding further down the call stack and closely observing how the execution environment evolves. In the process we have located a whole series of crucial subsystems, almost in a drive-by fashion.</p><p>The next article will be concerned with Python processes and threads. We are going to explore in more detail the roles that the <em>interpreter state</em>, the <em>thread state</em> and <em>frame objects</em> play in the evaluation of Python code. We are going to look at how Python organises and retrieves data at runtime, how it maintains data coherence and order of execution in a multi-threaded, multi-process environment, and in the process lift the mystery around the <em>Global Interpreter Lock</em>.</p><p>Lastly, we are going to have a look at Python’s memory management strategy. This involves Python’s strategy for allocating and deallocating objects. But it also touches the lifetime management of objects and how to use the available memory space most efficiently to reduce performance overhead.</p><p>It is going to be tough work, but I am already looking forward to it.</p><h3>References</h3><p>[1] A. 
Shaw: <em>Your Guide to the CPython Source Code</em>. Real Python, 2019, <a href="https://realpython.com/cpython-source-code-guide/">https://realpython.com/cpython-source-code-guide/</a>. Last visited Aug. 02, 2020.</p><p>[2] E. Bendersky: <em>Python internals</em>. 2009–2015, <a href="https://eli.thegreenplace.net/tag/python-internals">https://eli.thegreenplace.net/tag/python-internals</a>. Last visited Aug. 02, 2020.</p><p>[3] G. v. Rossum: <em>The History of Python. Python’s Design Philosophy</em>. 2009, <a href="http://python-history.blogspot.com/2009/01/pythons-design-philosophy.html">http://python-history.blogspot.com/2009/01/pythons-design-philosophy.html</a>. Last visited Aug. 02, 2020.</p><p>[4] P. Guo: <em>CPython internals: A ten-hour codewalk through the Python interpreter source code</em>. 2014, <a href="http://pgbovine.net/cpython-internals.htm">http://pgbovine.net/cpython-internals.htm</a>. Last visited Aug. 02, 2020.</p><p>[5] Python Developer’s Guide: <em>Design of CPython’s Compiler</em>. Python Software Foundation, 2020, <a href="https://devguide.python.org/compiler/">https://devguide.python.org/compiler/</a>. Last visited Aug. 02, 2020.</p><p>[6] Python Developer’s Guide: <em>Exploring CPython’s Internals</em>. Python Software Foundation, 2020, <a href="https://devguide.python.org/exploring/">https://devguide.python.org/exploring/</a>. Last visited Aug. 02, 2020.</p><p>[7] Y. Aknin: <em>Python’s Innards</em>. 2020, <a href="https://tech.blog.aknin.name/category/my-projects/pythons-innards/">https://tech.blog.aknin.name/category/my-projects/pythons-innards/</a>. Last visited Aug. 
02, 2020.</p><hr><p><a href="https://blog.sourcerer.io/python-internals-an-introduction-d14f9f70e583">Python Internals: An Introduction</a> was originally published in <a href="https://blog.sourcerer.io">Sourcerer Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Docker image in production — 1GB or 100MB]]></title>
            <description><![CDATA[<div class="medium-feed-item"><p class="medium-feed-image"><a href="https://blog.sourcerer.io/docker-image-in-production-1gb-or-100mb-a455ed5eb461?source=rss----b33180f5facf---4"><img src="https://cdn-images-1.medium.com/max/1044/1*sWfGa02Emst67fLb6J8mZg.png" width="1044"></a></p><p class="medium-feed-snippet">Today someone said to me: &#x201C;Actually, running an application using docker is very simple, such as a node, it only takes a few lines to&#x2026;</p><p class="medium-feed-link"><a href="https://blog.sourcerer.io/docker-image-in-production-1gb-or-100mb-a455ed5eb461?source=rss----b33180f5facf---4">Continue reading on Sourcerer Blog »</a></p></div>]]></description>
            <link>https://blog.sourcerer.io/docker-image-in-production-1gb-or-100mb-a455ed5eb461?source=rss----b33180f5facf---4</link>
            <guid isPermaLink="false">https://medium.com/p/a455ed5eb461</guid>
            <category><![CDATA[optimization]]></category>
            <category><![CDATA[docker]]></category>
            <category><![CDATA[programming]]></category>
            <dc:creator><![CDATA[Itchishiki Satoshi]]></dc:creator>
            <pubDate>Sat, 25 Jan 2020 16:45:08 GMT</pubDate>
            <atom:updated>2020-01-25T16:45:08.065Z</atom:updated>
        </item>
        <item>
            <title><![CDATA[Using AI to keep engineers happy at work]]></title>
            <link>https://blog.sourcerer.io/using-ai-to-keep-engineers-happy-at-work-3364c4ae3989?source=rss----b33180f5facf---4</link>
            <guid isPermaLink="false">https://medium.com/p/3364c4ae3989</guid>
            <category><![CDATA[talent-management]]></category>
            <category><![CDATA[software-engineering]]></category>
            <category><![CDATA[engineering-mangement]]></category>
            <category><![CDATA[human-resources]]></category>
            <category><![CDATA[employee-engagement]]></category>
            <dc:creator><![CDATA[Ryan Osilla]]></dc:creator>
            <pubDate>Fri, 31 May 2019 15:49:46 GMT</pubDate>
            <atom:updated>2019-05-31T15:49:46.366Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/480/1*BQXLalAZPa3qbVFkvHqleg.gif" /></figure><p>Sourcerer has come a long way from <a href="https://medium.com/@sergey_surkov">Sergey</a>’s and my <a href="https://medium.com/@sergey_surkov/a-swes-thought-exercise-a1e6eec4709">original idea</a>. Since then, we’ve managed to create and grow the only engineering online resume that is built entirely from an engineer’s commits. The community uses this resume not only to find their perfect job by matching their confirmed skills and abilities to the repositories of <a href="https://sourcerer.io/talent">potential employers</a>, but also to learn about themselves, connect with others, and grow professionally as engineers.</p><p>Check out a few of our resumes:</p><ol><li><a href="https://sourcerer.io/wanghuaili">https://sourcerer.io/wanghuaili</a></li><li><a href="https://sourcerer.io/carlomazzaferro">https://sourcerer.io/carlomazzaferro</a></li><li><a href="http://sourcerer.io/lmsanch">http://sourcerer.io/lmsanch</a></li></ol><p>We have a lot planned for the future, but one specific vector we’ve been hearing about has attracted our attention immensely: how we can use our technology to improve engineering retention within an organization.</p><p><strong>Engineers are valuable, extremely valuable, in this day and age.</strong> They are the most powerful workforce group in the world, and the demand for them is easy to see, with 11.6MM total tech jobs and 3.7MM job postings in the US for 2018 alone. While impressive on their own, another way to look at these massive numbers is to say that every third employee could walk out and get a job elsewhere.
This situation is amazing, and I can only see it increasing as time goes on and investment in emerging technologies continues to grow (74% YoY).</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*sPnd_iZ8aiQAp6ES" /></figure><p>This incredible momentum in the industry unfortunately also creates an increasing challenge to retain and satisfy the existing workforce. The latest data shows the <strong>median tenure at top tech companies to be 2 years</strong>. Yes, that’s right: only 2 years, and an employee is on to his or her next opportunity. The most interesting thing about this, though, is the cited reasons for leaving. In Hired’s <a href="https://hired.com/blog/candidates/2019-year-of-the-software-engineer/">“2019 The year of the software engineer”</a> study, they found that while compensation is one pretty obvious cause, one of the more striking findings is that “40% of engineers leave to learn something new”. This is incredibly insightful, especially when you consider that the <strong>average cost of losing a tech employee is 125% of salary</strong>. This cost and frequency are a real problem begging for a solution.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*X7BANirZjo7Ft7FP" /></figure><p>A <a href="https://www.washingtonpost.com/business/2019/04/11/new-way-your-boss-can-tell-if-youre-about-quit-your-job/?noredirect=on&amp;utm_term=.3486dea85e2f">recent article</a> in the Washington Post highlights this issue very well and details the problem set and the advancements IBM specifically has made. IBM wants to keep its employees from quitting, and it’s using artificial intelligence to do it.
IBM’s CEO, Ginni Rometty, said that thanks to its internal AI-powered “proactive retention” tool, IBM can predict with 95 percent accuracy which employees are likely to leave in the next six months.</p><p>Further in the article, Diane Gherson, IBM’s chief human resources officer, says that they consider thousands of factors, from job tenure, internal and external pay comparisons, and recent promotions all the way down to much simpler factors such as the length of an employee’s commute, to discover the patterns needed to predict potential losses.</p><p>While the HR “metadata” IBM is working with is extremely fascinating, what we realized is that for technology employees specifically, the source code an engineer authors is the best data to analyze, something we have already been capturing and processing with our sourcerer.io engineering resumes.</p><p>Over the past month we have devoted some real R&amp;D effort to exploring how our existing technology can be used to solve this. The results have been enlightening and have led to a product we’re calling tech talent analytics: a tool for human resource and engineering management that helps companies focus their IT retention efforts by mining the source code that their software engineers author. <strong>Our AI summarizes an engineer’s expertise, engagement, influence, and importance by observing their work in real time</strong>. We then connect these signals to compensation data and other employee metadata along with global trends to help companies map out their IT talent, identify critical contributors, plan succession, and discover training opportunities.</p><p>These retention intelligence tools are the clear future.
They are automatic, real-time, and objective, assessing the actual skills, abilities, and habits of the workforce, compared to a world where retention is monitored manually, infrequently, invasively, and subjectively.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*eIY9hwD9KU1bGSts" /></figure><p>There is obviously a lot here, and still more that I have yet to capture in writing, but I would be very interested in hearing from human resource and engineering leaders to refine our product. We are also looking for organizations that would be interested in piloting with us. As a pilot participant you’ll have 24/7 customer support for your trial integration, direct feedback &amp; customizations for your specific client needs, and preferential support for 3rd party HR system integrations.</p><p><a href="https://docs.google.com/forms/d/e/1FAIpQLSf1G9xpTO9ViZDYseXg4IteK-ds1qHS813KiVTV2-X9XtxaOw/viewform?usp=sf_link">Request access to our beta pilot program today.</a></p><p>If you happen to stumble on this post and have any interest, we’d be very eager to hear from you.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=3364c4ae3989" width="1" height="1" alt=""><hr><p><a href="https://blog.sourcerer.io/using-ai-to-keep-engineers-happy-at-work-3364c4ae3989">Using AI to keep engineers happy at work</a> was originally published in <a href="https://blog.sourcerer.io">Sourcerer Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Using commit message standardization to enhance your release and feature management.]]></title>
            <link>https://blog.sourcerer.io/using-commit-message-standardization-to-enhance-your-release-and-feature-management-6778c4b9cd8e?source=rss----b33180f5facf---4</link>
            <guid isPermaLink="false">https://medium.com/p/6778c4b9cd8e</guid>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[development]]></category>
            <category><![CDATA[coding]]></category>
            <category><![CDATA[developer-tools]]></category>
            <category><![CDATA[workflow]]></category>
            <dc:creator><![CDATA[Gwenael P]]></dc:creator>
            <pubDate>Mon, 14 Jan 2019 16:24:05 GMT</pubDate>
            <atom:updated>2019-01-14T16:24:05.067Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/884/1*VT0CBhZjYuMI71s3BpBPQw.png" /><figcaption>[Today’s random Sourcerer profile: <a href="https://sourcerer.io/gannetson?utm_source=medium&amp;utm_medium=profilelink">https://sourcerer.io/gannetson</a>]</figcaption></figure><p>Over the last few years, thanks to GitHub, GitLab, Bitbucket, Launchpad and other products, issue tracking has become popular even for small projects. We are now pushing, merging pull requests, running release cycles that follow the semantic release standard, and having our software controlled by CI and indexed into package managers. There are still ways the work of developers can be made easier and more stable through tools and processes. Today I am going to explore with you a way to deal with some side work that you should already be managing in any of your projects: changelogs.</p><p>Commit messages have always been a great way to keep track of changes in a project. However, in a lot of projects (either personal or professional ones, maintained by a company or not), commit messages are often quite messy, and it is not possible to use them as something to offer end users to read. This is due to some recurring problems throughout the life of your project:</p><ul><li>Everybody has their own way of committing and writing commit messages. Even if a standard is established, people quite often can’t follow the commit message guidelines EXACTLY on their own. There is always a commit message that is a bit different, because somebody forgot to include the issue number they were working on, or to specify whether the commit is a fix, a new feature…</li><li>Commits are sometimes made in a hurry, or in bad conditions, and you definitely can’t hand your end users a raw commit log. There will be a lot of polluting data mixed in with what you want to bring to them.</li><li>Commit messages are not tied to releases of your project.
So using these messages as changelogs would not provide enough information.</li></ul><p>If you could get rid of those blocking points, you would have a perfect way to log info painlessly while developing, without spending time afterwards publishing and thinking about the changes you made between two releases.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/612/1*hYZdN9S9SSCI5LRxz-DWsg.png" /><figcaption>A really clean commit history is the best project overview you could dream of. However, it is really hard for a team to follow guidelines exactly enough to keep it this consistent (image source: <a href="https://www.atlassian.com/git/tutorials/using-branches">Atlassian.com</a>)</figcaption></figure><h3>Introducing Commitizen</h3><p>Commitizen is a node package that provides a way to easily standardize commit messages, and helps you keep a clear and parsable convention throughout your project history. You will then be able to distinguish commits that:</p><ul><li>fix an existing issue (and add a link to the issue discussion)</li><li>add a feature (with a link to the specification)</li><li>change the build system</li><li>or make any other type of change to your project.</li></ul><p>Everything is handled via a script call that shows a commit wizard in the console, replacing the git commit command. Step by step, you will be asked for the type of commit, the commit message, the long description, an issue that gets solved by the commit, and so on.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/596/1*heqHzRw_xkEGp1HL-XnFHQ.png" /></figure><p>This wizard will prepare a nice and parsable commit message for you.
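Because every answer from the wizard ends up in a fixed "type(scope): subject" shape (the Conventional Commits style), the resulting history becomes machine-readable. As a hedged sketch (this is not part of commitizen itself, and the sample messages are invented), parsing such a message takes only a few lines of JavaScript:

```javascript
// Sketch: parse a "type(scope): subject" commit message, the shape the
// wizard produces. The sample messages below are made up for illustration.
const CONVENTIONAL = /^(\w+)(?:\(([^)]+)\))?: (.+)$/;

function parseCommit(message) {
  const match = CONVENTIONAL.exec(message);
  if (match === null) return null; // a hand-written, non-conforming message
  const [, type, scope, subject] = match;
  return { type, scope: scope || null, subject };
}

console.log(parseCommit('feat(auth): add a password reset form'));
console.log(parseCommit('fix: handle empty commit body'));
```

A changelog generator is essentially this parser plus grouping by type and release, which is exactly what the tooling discussed in this article automates.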
With this wizard, there is no way to end up with a poorly formatted commit message in your commit history.</p><p>You can set up commitizen to adapt to different standards by adding configuration to the package.json of your project, or by placing a configuration file in the root directory of your project.</p><p>Once this is set up, you will see your commit history become more and more tidy.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/424/1*VHFudyOQk_Rtz_Kr8i1P4A.png" /><figcaption>The commit history of the commitizen/cz-cli repo on GitHub is looking pretty good, and it is now possible to parse it to reuse the info provided by commit messages.</figcaption></figure><p>Now that you can rely more on the data displayed by a git log command, you can use it to generate a changelog for your project.</p><p>There are different ways to achieve this, but <a href="https://www.npmjs.com/package/auto-changelog">Auto changelog</a> might be a good start. This node package can be used to generate a file containing a changelog. You can use it to generate a changelog.md file, using the default options.</p><pre>$ auto-changelog --output changelog.md</pre><p>You can put this in your scripts in the package.json file, and you may want to add a commit hook as well to generate your changelog. If you manage versions of your project with git releases, Auto-changelog will take care of writing your changelog entries, sorting them by version.</p><p>You now have something relevant and fully automated to integrate into your project. No more changelog writing.</p><p>But wait, there is more.</p><h3>Integrating the changelog in your own app</h3><p>Having a markdown changelog is one good thing for your project. But you can also use auto-changelog to bring this data straight into your app.
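As a hedged sketch of where this is heading, turning changelog data into an HTML fragment could look like the following. The data shape here (an array of releases with `tag` and `commits` fields) is an assumption for illustration, not auto-changelog's documented output; check the file the tool actually generates for the real field names:

```javascript
// Hypothetical data in the spirit of a JSON changelog export.
// The field names (tag, commits, subject) are assumptions for this sketch.
const releases = [
  { tag: 'v1.1.0', commits: [{ subject: 'add a changelog page' }] },
  { tag: 'v1.0.0', commits: [{ subject: 'initial release' }] },
];

// Turn the release list into an HTML fragment for a changelog section.
function renderChangelog(data) {
  return data
    .map(({ tag, commits }) =>
      `<h4>${tag}</h4><ul>` +
      commits.map(({ subject }) => `<li>${subject}</li>`).join('') +
      '</ul>')
    .join('');
}

console.log(renderChangelog(releases));
```

The same idea scales from a static fragment like this to a full changelog page in your frontend.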
In this part, we will assume that you are developing a web project and want to integrate this information into your frontend, in a dedicated section.</p><pre>auto-changelog --template json --output changelog-data.json</pre><p>This command will generate a json document, ready to be integrated into your app.</p><p>Now, in your source folder, you can generate a complete page using this.</p><p>It is then conceivable to build a full interface in your webapp to display your changelog.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/369/1*e5oSyKbJsSb3k3DncsRhwQ.png" /></figure><p>You can push the implementation and go really far with that. You can think about making a full page where people can browse past versions, see issues linked to commits, and even interact with those issues (see the whole discussion, add some comments, …).</p><p>In fact, if users are able to interact with your changelog, they are able to interact with the past of your product. Letting them ask questions on changelog items would help you deal with support-related concerns, helping them with questions they might have about the present state of your project. And if you let them see ongoing issues, you make it easier for them to contribute, thus taking part in future developments.</p><h3>Conclusion</h3><p>Integrating this kind of workflow can cost you quite some time, as it may require setting up commit hooks and installing several dependencies in your project. However, you will only have to set it up once, and it saves you time and effort on parts that require meticulousness.</p><p>As communicating with your users is a very important and often neglected part of the work of technical teams, it will give you some easy extra points in the long run.</p><p>Integrating a changelog page will show your users that you care about communication and transparency.
And adding a page to file and comment on issues will prove to them that you care about their voices.</p><h4>Resources</h4><p>The commitizen command line utility:</p><p><a href="https://github.com/commitizen/cz-cli">GitHub - commitizen/cz-cli: The commitizen command line utility. #BlackLivesMatter</a></p><p>Auto changelog, used to generate the changelog from commits:</p><p><a href="https://github.com/CookPete/auto-changelog">CookPete/auto-changelog</a></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=6778c4b9cd8e" width="1" height="1" alt=""><hr><p><a href="https://blog.sourcerer.io/using-commit-message-standardization-to-enhance-your-release-and-feature-management-6778c4b9cd8e">Using commit message standardization to enhance your release and feature management.</a> was originally published in <a href="https://blog.sourcerer.io">Sourcerer Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[AdonisJS : a full-featured node framework for modern web servers]]></title>
            <link>https://blog.sourcerer.io/adonisjs-a-full-featured-node-framework-for-modern-web-servers-93532e3b36af?source=rss----b33180f5facf---4</link>
            <guid isPermaLink="false">https://medium.com/p/93532e3b36af</guid>
            <category><![CDATA[laravel]]></category>
            <category><![CDATA[javascript]]></category>
            <category><![CDATA[nodejs]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[php]]></category>
            <dc:creator><![CDATA[Gwenael P]]></dc:creator>
            <pubDate>Mon, 10 Dec 2018 01:18:07 GMT</pubDate>
            <atom:updated>2018-12-10T13:35:50.247Z</atom:updated>
<content:encoded><![CDATA[<h3>AdonisJS : a full-featured node framework for modern web servers</h3><p>Node is becoming one of the most popular choices among developers for modern web servers. You can build web servers in various elegant ways, using up-to-date tools such as ExpressJS.</p><p>However, a lot of developers have had a hard time committing to a high-level framework to build web servers for their applications, sticking instead with tools that stay light but require a lot of configuration and wiring to produce a complete setup for a large project.</p><p>Today, we are going to focus on a higher-level framework that comes with batteries included, and that will allow you to implement advanced features easily.</p><h3>Meet AdonisJS</h3><p>AdonisJS is a node framework inspired by the well-known PHP framework Laravel. It relies on concepts such as dependency injection and service providers to help you design beautiful, reliable and easily testable code.</p><p>Prepare to leave spaghetti code behind in favor of reusable OO structure.</p><h3>Installation</h3><p>First things first, to create a new AdonisJS server, you will need to install the Adonis CLI, the command line tool that will help you manage your projects:</p><pre>npm i -g @adonisjs/cli</pre><p>This will provide you with an “adonis” command to use in your terminal.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/565/1*k_X1UG65sbsVWrcl8EPRqw.png" /></figure><p>To create a new Adonis application, use the subcommand “new”:</p><pre>adonis new my-adonis-server</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/427/1*rP2hc2KgGFZG9xiwt5Am-A.png" /></figure><p>As explained in the log, it uses the default fullstack-app template (adonisjs/adonis-fullstack-app) to create your project in a “my-adonis-server” folder.</p><p>You can now go to this folder and start serving your app in development mode:</p><pre>cd my-adonis-server<br>adonis serve --dev</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/477/1*c8YfNu6o-Fp4xERBUUK7Ew.png" /></figure><p>Your app is now served from your machine; you can now dive into your project!</p><p>Keep the server running for the rest of the introduction.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/680/1*zxayjE2upB2_96fdp-gGXg.png" /></figure><h3>A bit of modeling</h3><p>Let’s now create an API to manage a resource. In the following chapter, we will create an endpoint to manage tasks.</p><p>First, let’s set up the project so it uses sqlite:</p><pre>adonis install sqlite3</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/511/1*UnX3FpJ9nrAC7yMgj_9g5Q.png" /></figure><p>Then create the Task SQL table:</p><pre>adonis make:migration tasks<br># select &quot;create table&quot;<br>adonis migration:run</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/533/1*LK_rr4KPgbdZItDVWTnPHA.png" /></figure><p>Note that every time you want to edit your database, you will have to create a migration.</p><blockquote>Migrations allow you to record every modification you make to your models, and will update data to follow the new formats and rules you are implementing.</blockquote><blockquote>It is possible to define an “up” method and a “down” method, depending on whether you want to move to a newer or an older version.</blockquote><p>Then, create the model and controller related to this table:</p><pre>adonis make:model Task<br>adonis make:controller TaskController<br># select &quot;For HTTP requests&quot;</pre><figure><img alt="" src="https://cdn-images-1.medium.com/max/590/1*H-W7QL3rAON7Jps-exDgXA.png" /></figure><p>You now have set up your database, created a table, a model, and a controller.
The source code for the model and the controller is present in the app folder.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/198/1*_0bkw9UdCyCWLOH12GWD0g.png" /></figure><p>You might have noticed that as you entered the last commands, the server automatically detected that your project changed, and took care of reloading the new setup.</p><p>Now we can add some content to our model by creating another migration.</p><pre>adonis make:migration tasks</pre><p>Now edit the new file under database/migrations</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/9c620de5110186cc363ca0760f877e77/href">https://medium.com/media/9c620de5110186cc363ca0760f877e77/href</a></iframe><p>And run this migration</p><pre>adonis migration:run</pre><p>Now your Task model has title, description, and done properties.</p><h3>Creating a page displaying content</h3><p>Now let’s create a page that displays a list of tasks.</p><pre>//Route.on(&#39;/&#39;).render(&#39;welcome&#39;)<br>Route.get(&#39;/&#39;, &#39;TaskController.home&#39;)</pre><p>Then, in your TaskController, add the code handling the “/” route.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/4a2891782c6b7973bc4866aa04f71ec4/href">https://medium.com/media/4a2891782c6b7973bc4866aa04f71ec4/href</a></iframe><p>And add the template of the page in “resources/views/tasklist.edge”</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/edfc1f8a6002b68558fc239809a60083/href">https://medium.com/media/edfc1f8a6002b68558fc239809a60083/href</a></iframe><p>In “public/style.css”, delete all the css rules and put:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/757913ec611022e4969cf28613f288ad/href">https://medium.com/media/757913ec611022e4969cf28613f288ad/href</a></iframe><p>This will display an empty list of tasks on “localhost:3000/” (so basically, nothing at the moment!)</p><p>It is currently empty because there are no tasks in the database yet. Let’s fix this!</p><h3>Creating tasks in your database</h3><p>For the sake of this tutorial, we will create our first tasks in the TaskController method we already defined:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/f4905e9560497b30931bb6dd128b482c/href">https://medium.com/media/f4905e9560497b30931bb6dd128b482c/href</a></iframe><p>Load the tasklist once to insert your tasks into the database, then erase or comment out those methods.</p><p>You should now see your tasks on the page.</p><h3>Creating new tasks</h3><p>Once this is working, we can add new content via a new task form.</p><p>In your task controller, add the task creation logic:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/7c1147e91a2b8feedafcee3c17950a9d/href">https://medium.com/media/7c1147e91a2b8feedafcee3c17950a9d/href</a></iframe><p>The task creation form in your tasklist template:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/116af5452323dccbf61958fa72bdafdb/href">https://medium.com/media/116af5452323dccbf61958fa72bdafdb/href</a></iframe><p>And the post route in “start/routes.js”</p><pre>Route.post(&#39;/task/create&#39;, &#39;TaskController.create&#39;)</pre><p>Now you can add tasks from the form displayed in your task list.</p><h3>Deleting Tasks</h3><p>To delete tasks, the implementation is pretty much the same as the one we did to create tasks:</p><ul><li>Add a delete method in the controller</li><li>Adapt the template</li><li>Create the DELETE route</li></ul><p>TaskController:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a 
href="https://medium.com/media/1294a85a64671c1297a2f5378aa62ae8/href">https://medium.com/media/1294a85a64671c1297a2f5378aa62ae8/href</a></iframe><p>“resources/views/tasklist.edge”</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/739146858c0dc7f7fc8c7aec7db3ef27/href">https://medium.com/media/739146858c0dc7f7fc8c7aec7db3ef27/href</a></iframe><p>“start/routes.js”</p><pre>Route.get(&#39;/task/delete/:id&#39;, &#39;TaskController.delete&#39;)</pre><h3>A bit of templating</h3><p>Our application is going to grow. In order to reuse some parts of the html, we are going to define a main layout, and include the specific code of each page in it.</p><p>Create a “layout_main.edge” file in “resources/views”. This file will include the base of our page, and will be used by each page we create.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/acae728c154593270a48ee8afaa213cf/href">https://medium.com/media/acae728c154593270a48ee8afaa213cf/href</a></iframe><p>Now you can refactor tasklist.edge</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/d07536bf9e01f2c2eefd23f8d2d295a6/href">https://medium.com/media/d07536bf9e01f2c2eefd23f8d2d295a6/href</a></iframe><h3>Authentication</h3><p>You have probably already seen that there are some files to manage users in your project (“app/Models/User.js”).</p><p>First, let’s add a UserController:</p><pre>#Choose &quot;For HTTP requests&quot;<br>adonis make:controller UserController</pre><p>Go to the router (“start/routes.js”) and add some routes:</p><ul><li>two routes for the login and register process, displaying the templates with the forms</li><li>two routes for receiving login and register data and handling user creation and login</li><li>one route for the logout process</li></ul><pre>Route.on(&#39;/register&#39;).render(&#39;register&#39;)<br>Route.on(&#39;/login&#39;).render(&#39;login&#39;)</pre><pre>Route.post(&#39;/register&#39;, &#39;UserController.create&#39;)<br>Route.post(&#39;/login&#39;, &#39;UserController.login&#39;)</pre><pre>Route.get(&#39;/logout&#39;, &#39;UserController.logout&#39;)</pre><p>Then, add the templates:</p><p>“resources/views/login.edge”</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/40586aca23845f2bde311a434a0e8372/href">https://medium.com/media/40586aca23845f2bde311a434a0e8372/href</a></iframe><p>“resources/views/register.edge”</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/d612624d6ec7e720ce89c756a8a87a58/href">https://medium.com/media/d612624d6ec7e720ce89c756a8a87a58/href</a></iframe><p>“Controllers/http/UserController.js”</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/983908ff5a353afe203c0cfcbe542b6b/href">https://medium.com/media/983908ff5a353afe203c0cfcbe542b6b/href</a></iframe><p>You can also modify the tasklist template to add the register and login links, and a logout link if the user is logged in.</p><p>You can also include the create task form in the loggedIn conditional, in order to prevent anonymous users from creating tasks.</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/5bbabecd09f90501162123d6fc11bee1/href">https://medium.com/media/5bbabecd09f90501162123d6fc11bee1/href</a></iframe><h3>More features</h3><p>We have now seen what a basic and naive approach to building a web app can look like. You might have thought about a lot of other features to include in the project.
Here is a quick list of other things Adonis can provide:</p><h4>Relations between models</h4><p>If you want tasks to be owned by users, you can create relations between models with the following syntax:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/a46ad20893d0bde361130fa930563496/href">https://medium.com/media/a46ad20893d0bde361130fa930563496/href</a></iframe><p>You can also use hasOne, belongsTo, belongsToMany and manyThrough.</p><p>See the docs for more details: <a href="https://adonisjs.com/docs/4.0/relationships">https://adonisjs.com/docs/4.0/relationships</a></p><h4>Validators</h4><p>You can use validators to check whether the data flowing into your controllers has the right format, and emit messages (via session.flash, for instance) when errors occur.</p><p>Validators are a 3rd party npm module: <a href="https://www.npmjs.com/package/adonis-validator">https://www.npmjs.com/package/adonis-validator</a></p><h4>Using websocket instead of HTTP requests</h4><p>As you might have seen when creating controllers, you can also generate controllers that are designed to use websockets.</p><p>More info here: <a href="https://adonisjs.com/docs/4.1/websocket">https://adonisjs.com/docs/4.1/websocket</a></p><h4>Internationalization</h4><p>A full guide to making your app multilanguage is available in the Adonis docs as well:</p><p><a href="https://adonisjs.com/docs/4.1/internationalization">https://adonisjs.com/docs/4.1/internationalization</a></p><h3>Conclusion</h3><p>Adonis is a great choice for those who need a full-featured web server framework and who want to keep control over their implementation.</p><p>This framework could be of great use if you want to kickstart a project and follow common guidelines and concepts. It will help you implement data migrations, keep your code clean, handle data validation…</p><p>However, integrating exotic libraries can be painful. Adonis extensions need to be built specifically for this framework. This would be fine for Adonis and its users if the framework were in a monopoly situation, which is not the case.</p><p>This guide covered just the basics of Adonis, and there is still a lot to write about. If you enjoyed it, I encourage you to visit the official Adonis website.</p><p><a href="https://adonisjs.com/">https://adonisjs.com</a></p><p>I personally think that this framework can provide a really nice way to deal with a lot of problems that can appear while developing a big project. I really like the way data migration is handled, and how you can easily split your code to avoid ending up with a messy codebase.</p><p>I feel a bit concerned about documentation and modularity, though; the guides and API reference are nicely managed, but as every Adonis extension is designed only for this very framework, I worry about having a lot of framework-specific guides to read in order to use add-ons, and those guides won’t help me with another framework.</p><p>And you? What are your impressions of Adonis? Would you rather use a full-featured framework with all batteries included, or do you prefer the modular Express? Feel free to comment!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=93532e3b36af" width="1" height="1" alt=""><hr><p><a href="https://blog.sourcerer.io/adonisjs-a-full-featured-node-framework-for-modern-web-servers-93532e3b36af">AdonisJS : a full-featured node framework for modern web servers</a> was originally published in <a href="https://blog.sourcerer.io">Sourcerer Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[GO ValueObject]]></title>
            <link>https://blog.sourcerer.io/go-valueobject-19ea273f9056?source=rss----b33180f5facf---4</link>
            <guid isPermaLink="false">https://medium.com/p/19ea273f9056</guid>
            <category><![CDATA[design-patterns]]></category>
            <category><![CDATA[architecture]]></category>
            <category><![CDATA[golang]]></category>
            <category><![CDATA[go]]></category>
            <category><![CDATA[value-objects]]></category>
            <dc:creator><![CDATA[V. K.]]></dc:creator>
            <pubDate>Tue, 04 Dec 2018 14:29:26 GMT</pubDate>
            <atom:updated>2019-02-19T07:29:26.363Z</atom:updated>
<content:encoded><![CDATA[<h3>Go ValueObject</h3><h4>A simple example of how to use a Value Object in your Go project</h4><figure><img alt="" src="https://cdn-images-1.medium.com/max/960/1*JY4ID7rqag3XoFpzY1A2Gw.jpeg" /><figcaption>This solution definitely will work and will pass all checks, validations, and inspections.</figcaption></figure><p>Let’s consider a super simple example of how you can use a Value Object in your Go project.</p><h4>Prerequisites</h4><p>Suppose we have a simple project and we have to implement a user registration form; for now we care only about 2 fields: name and email.<br>A super simple case so far…<br>And with this newly created user we have to perform 3 actions: save the user in the db (obviously), send a welcome email, and add the user to search.<br>Still nothing difficult…</p><h4>Preparation</h4><p>With separation of concerns, reusability, high cohesion, low coupling, and single responsibility in mind, we will create 3 services (or components, or modules, or bridges to separate microservices) with the names:</p><ul><li>user persistence</li><li>mailer</li><li>user search</li></ul><p>The names may differ, but I’m sure you’ve got the gist of all these services.<br>Why do we need this? A couple of reasons:</p><ol><li>Because a user may be created in the db, updated, deleted from the db, etc. — a whole bunch of stuff related only to user persistence in the database.</li><li>Because we need to send not only a welcome email, but also a confirmation email, or a Halloween greeting, etc. — a whole bunch of stuff related only to emails.</li><li>Because we need to add a user to search, delete a user from search, update a user, etc.
— only search-related stuff.</li></ol><h4>Implementation #1</h4><p>For a first simple implementation we might have something like this:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/c5670ae36d33b2c494c332f1eb68fd59/href">https://medium.com/media/c5670ae36d33b2c494c332f1eb68fd59/href</a></iframe><p>At first glance it looks pretty common… nothing ugly or dangerous so far…<br>Here we just obtain the name and email from the params or request and pass this data to all downstream services.<br>But, hold on… <br>We have to validate the name and email in the function CreateNewUser to send a response with errors in case of invalid data. And we have to validate the name and email in the function db.SaveUser, because this function might be called from different places (create, update, delete, create after OAuth2 login, etc.) and there is no guarantee of data validity, therefore we have to validate.<br>The same goes for mailer.SendEmail: it may be a welcome email, a reset-password email from another service, a personalized marketing email, or an email about an account being suspended by the moderation team, sent from another internal service without validation, etc. — the same situation with data validity…<br>And the same situation with search.AddUser: maybe the devops team is creating a new search cluster and calls this service from a shell script to put all users into search; who knows about data validity… we have to validate.</p><p>And it comes down to a situation where we have the same validation in many places — that’s bad.
Moreover, suppose the business team decides to add one more field: age or country or phone number… we have to update all the validations and all the function signatures — that’s very bad.<br>Value Object to the rescue!</p><h4>Implementation #2</h4><p>It isn’t a groundbreaking invention to say that we can put all the fields related to the create-new-user process in one struct, like this:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/e01ed1fcccfeeb6159deef2d584e39bc/href">https://medium.com/media/e01ed1fcccfeeb6159deef2d584e39bc/href</a></iframe><p>And let’s add one more requirement to this struct: it must carry only valid data, and must <strong>ensure </strong>that<strong> it contains 100% valid data</strong>!<br>Let’s see how it may be done:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/5011df99f99fb18a5fe255a833a2bb3e/href">https://medium.com/media/5011df99f99fb18a5fe255a833a2bb3e/href</a></iframe><p>No big difference here: we added an errors field to hold all validation errors for all fields and to provide all errors in one call. Also, name and email are non-exported — this ensures that no one can create the value object by doing something like this: vo := Instance{Name: “invalid email”}</p><p>To create a new value object you have to use the New function:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/6befec5227385c898e2666b18a6c7045/href">https://medium.com/media/6befec5227385c898e2666b18a6c7045/href</a></iframe><p>It’s the only way to create the value object<br>(technically it’s possible to do vo := Instance{} and use it downstream, but this blank value object will produce errors at development time, hence it is useless).
In this example the New function receives a map with the fields name and email, but you can pass a JSON string, or even a request body, a query string, or something else — it’s up to you how to get data into the value object.<br>The main purpose of this function is to validate the provided data; for this we have vo.initName(data) and vo.initEmail(data), and these functions also assign the valid data to the struct’s fields. You may perform extra actions here, if you wish (convert from one data type to another, and so on).<br>It’s also important to note that this function returns a value, not a pointer to the struct; this is done intentionally to provide immutability. Thanks to this, once validation has been performed successfully, the value object contains only valid data (and can’t be changed).<br>The value object must have something like this:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/1455213b189cfd5c842b830af82c79f8/href">https://medium.com/media/1455213b189cfd5c842b830af82c79f8/href</a></iframe><p>Here we have super simple validation, but you may use an external library here or create your own to validate the data provided by the user.
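</p><p>The embedded gists carry the article’s exact code; as a rough, self-contained sketch of the same idea (the Instance, New, initName, and initEmail names follow the article, while the concrete validation rules and the getter shapes are assumptions of mine), the pattern might look like this:</p>

```go
package main

import (
	"fmt"
	"strings"
)

// Instance is a sketch of the value object: unexported fields so it
// cannot be populated from outside, plus a map collecting all
// validation errors in one place.
type Instance struct {
	name   string
	email  string
	errors map[string]string
}

// New is the only way to obtain a populated Instance. It validates the
// input and returns the struct by value, so downstream code gets an
// immutable copy that is either fully valid or carries its errors.
func New(data map[string]string) Instance {
	vo := Instance{errors: map[string]string{}}
	vo.initName(data)
	vo.initEmail(data)
	return vo
}

// initName validates and assigns the name field (rule is illustrative).
func (vo *Instance) initName(data map[string]string) {
	if len(data["name"]) < 2 {
		vo.errors["name"] = "name is too short"
		return
	}
	vo.name = data["name"]
}

// initEmail validates and assigns the email field (rule is illustrative).
func (vo *Instance) initEmail(data map[string]string) {
	if !strings.Contains(data["email"], "@") {
		vo.errors["email"] = "email is invalid"
		return
	}
	vo.email = data["email"]
}

// Getters: since the fields are unexported, these are the only read access.
func (vo Instance) Name() string              { return vo.name }
func (vo Instance) Email() string             { return vo.email }
func (vo Instance) Errors() map[string]string { return vo.errors }

func main() {
	vo := New(map[string]string{"name": "Bob", "email": "bob@example.com"})
	fmt.Println(len(vo.Errors()), vo.Name(), vo.Email())
}
```

<p>New validates, assigns, and hands back the instance by value; callers either get a usable object or a map of every validation failure at once.</p><p>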
Also, you may convert data from one data type to another, or assign a default value.</p><p>And the last part — getters:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/88ba91fc21ecc53b2147686bfbb33bdb/href">https://medium.com/media/88ba91fc21ecc53b2147686bfbb33bdb/href</a></iframe><p>Getters are a little bit boring, but as long as we have non-exported properties we have to have them; the good thing about getters is that they are tiny and super simple.</p><p>Let’s see the whole value object:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/d11388f8f85fbc47d0e3c36eb6797a0a/href">https://medium.com/media/d11388f8f85fbc47d0e3c36eb6797a0a/href</a></iframe><p>I hope you find that the whole value object looks simple, concise, and clear.<br>There is nothing complicated or confusing here. And it’s super simple to use, reuse, cover with tests, extend, and maintain this value object.</p><p>And the good part: our main function CreateNewUser may now look like this:</p><iframe src="" width="0" height="0" frameborder="0" scrolling="no"><a href="https://medium.com/media/6a74fd42e1bd7dd1809ca0f08aeeec99/href">https://medium.com/media/6a74fd42e1bd7dd1809ca0f08aeeec99/href</a></iframe><p>Benefits of doing so:</p><ul><li>The value object is always valid and always contains valid data.</li><li>We have a map of all validation errors in case invalid data is provided to the value object.</li><li>The value object is immutable.</li><li>We can freely use this value object downstream without validation and other repetitive stuff.</li><li>In case we need to add one more parameter, we don’t have to change all the function signatures.</li><li>We have separation of concerns and follow the single responsibility principle and many other nice principles.</li></ul><h4>Conclusion</h4><p>I hope you find the value object a good idea and will use it in your project.<br><br>You may also check out <a 
href="https://github.com/cn007b/monitoring/blob/efa5bd3ca7feee2a57c7d95d24d9de012dc87022/src/go-app/app/vo/ProjectVO/instance.go#L22">this</a> value object from a real-life <a href="https://github.com/cn007b/monitoring">project</a> and see the whole picture, and how all the project’s layers look together (<a href="https://github.com/cn007b/monitoring/blob/efa5bd3ca7feee2a57c7d95d24d9de012dc87022/src/go-app/app/vo/ProjectVO/instance.go#L22">value object</a>, <a href="https://github.com/cn007b/monitoring/blob/61a717d8031a4f2b29c0dc4149c40e93de327d75/src/go-app/controller/api/projects/projects.go#L19">controller</a>, <a href="https://github.com/cn007b/monitoring/blob/4cfecc738090728987e284234674004f52e543b5/src/go-app/service/project/project.go#L18">service</a> and <a href="https://github.com/cn007b/monitoring/blob/61a717d8031a4f2b29c0dc4149c40e93de327d75/src/go-app/service/internal/datastore/Project/dao.go#L29">database</a>).<br>With this example I truly believe you will agree that value objects help you write clearer code and help you focus on the important stuff rather than on validation.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=19ea273f9056" width="1" height="1" alt=""><hr><p><a href="https://blog.sourcerer.io/go-valueobject-19ea273f9056">GO ValueObject</a> was originally published in <a href="https://blog.sourcerer.io">Sourcerer Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Turning Bugs into Gems: Debugging Ruby Applications]]></title>
            <link>https://blog.sourcerer.io/turning-bugs-into-gems-debugging-ruby-applications-314ff869a611?source=rss----b33180f5facf---4</link>
            <guid isPermaLink="false">https://medium.com/p/314ff869a611</guid>
            <category><![CDATA[debugging]]></category>
            <category><![CDATA[ruby]]></category>
            <category><![CDATA[ruby-on-rails]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[programming]]></category>
            <dc:creator><![CDATA[Robert W. Oliver II]]></dc:creator>
            <pubDate>Thu, 08 Nov 2018 02:08:33 GMT</pubDate>
            <atom:updated>2018-11-08T02:08:32.834Z</atom:updated>
<content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/960/0*uE0cEsloRFfK1RVe.jpg" /><figcaption>[Today’s random Sourcerer profile: <a href="https://sourcerer.io/yuki24?utm_source=medium&amp;utm_medium=profilelink">https://sourcerer.io/yuki24</a>]</figcaption></figure><p>Ruby is a beautiful language. Its worst, and best, quality is that once you use it, you likely won’t want to use anything else.</p><p>That’s good because you can do nearly anything with it. It’s bad because it’s not as commonly used as it should be.</p><p>Ruby’s approach of treating everything as an object, together with its metaprogramming abilities, gives it an array of immensely powerful debugging tools and techniques.</p><p>Whether you are a beginner or an experienced Ruby developer, working on Ruby applications or Ruby on Rails websites, I aim to share some helpful, late-night-saving tips that will help you find those pesky bugs in your Ruby applications.</p><h3>Inspector Ruby</h3><p>The “inspect” method is built into every class and is incredibly useful for peering inside objects. Here’s a trivial example:</p><pre>irb(main):001:0&gt; a = &quot;Hello, World!&quot;<br>=&gt; &quot;Hello, World!&quot;<br>irb(main):002:0&gt; a.inspect<br>=&gt; &quot;\&quot;Hello, World!\&quot;&quot;</pre><p>In this case, inspect works much like var_dump in PHP.
Let’s look at an array:</p><pre>irb(main):001:0&gt; a = [&quot;Hello&quot;, &quot;World&quot;]<br>=&gt; [&quot;Hello&quot;, &quot;World&quot;]<br>irb(main):002:0&gt; a.inspect<br>=&gt; &quot;[\&quot;Hello\&quot;, \&quot;World\&quot;]&quot;</pre><p>Helpful, but for even better output in JSON, require the json module and use .to_json on any object:</p><pre>[1] pry(main)&gt; require &#39;json&#39;<br>=&gt; true<br>[2] pry(main)&gt; a = [&quot;Hello, World!&quot;]<br>=&gt; [&quot;Hello, World!&quot;]<br>[3] pry(main)&gt; a.to_json<br>=&gt; &quot;[\&quot;Hello, World!\&quot;]&quot;</pre><h3>Monkey Patching to the Rescue</h3><p>Ruby allows for monkey patching — a goofy name for an incredibly powerful technique. At any time during program execution, you can augment existing code by simply redefining it. Consider the following code:</p><pre>#!/usr/bin/env ruby</pre><pre>class Demo<br>  def initialize<br>    puts &quot;Hello, World!&quot;<br>  end<br>end</pre><pre>a = Demo.new</pre><pre>class Demo<br>  def initialize<br>    puts &quot;Goodbye, World!&quot;<br>  end<br>  def wave<br>    puts &quot;*waves*&quot;<br>  end<br>end</pre><pre>b = Demo.new</pre><pre>b.wave<br>a.wave</pre><p>In this example, we see that the Demo class acts as expected when we create a new object named “a”. But then we “redefine” the class, both changing the initialize method and adding a new method. When we create a new object named “b”, it runs the initialize method with the new text.</p><p>However, note that object “a” gains the “wave” method as well, even though it was created before the class was redefined. Ruby’s monkey patching allows for surgery on any class even if objects have already been created from that class.</p><p>How does this help us with debugging? By being able to insert new methods or redefine existing methods in classes, we can add debugging printouts or override variables or the tests on those variables.
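</p><p>As a quick sketch of that idea (the Greeter class and its methods here are mine, not from the article), you can alias the original method and redefine it with debug printouts wrapped around the original call:</p>

```ruby
# Hedged sketch: class and method names are illustrative, not from the article.
class Greeter
  def greet(name)
    "Hello, #{name}!"
  end
end

# Reopen the class, keep a handle to the original method, and wrap it.
class Greeter
  alias_method :greet_without_logging, :greet

  # Redefine greet so every call logs its argument and result to stderr
  # before delegating to the original implementation.
  def greet(name)
    warn "[debug] Greeter#greet called with #{name.inspect}"
    result = greet_without_logging(name)
    warn "[debug] Greeter#greet returned #{result.inspect}"
    result
  end
end

Greeter.new.greet("World") # behaves exactly as before, but logs to stderr
```

<p>Every caller of Greeter#greet now gets the logging for free, without a single call site changing.</p><p>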
You can even override methods to include a logging function that logs the details of objects, and the actions taken upon them, to disk.</p><p>Being able to do this at any time is useful, but why not put this metaprogramming power to even better use with on-demand introspective debugging?</p><p>There’s a tool to do just that — Pry.</p><h3>Prying Into Your App</h3><p><a href="https://github.com/pry/pry">Pry</a> is an incredible tool for debugging. It’s a full-fledged replacement for <em>irb</em> that is great for standalone use or can be easily embedded into your project.</p><p>Pry treats scopes like directories, allowing the <em>cd</em> command to be used to navigate between scope layers, and <em>ls </em>to list instance and local variables:</p><pre>[1] pry(main)&gt; class Demo<br>[1] pry(main)*   @a = &quot;This is a demo.&quot;<br>[1] pry(main)* end  <br>=&gt; &quot;This is a demo.&quot;<br>[2] pry(main)&gt; cd Demo<br>[3] pry(Demo):1&gt; ls<br>instance variables: @a<br>locals: _  __  _dir_  _ex_  _file_  _in_  _out_  _pry_<br>[4] pry(Demo):1&gt; cd</pre><p>True to Ruby’s monkey-patching nature, you can use Pry to edit your code during the execution of your program.
Here’s an example:</p><pre>[1] pry(main)&gt; class Demo<br>[1] pry(main)*   @a = &quot;This is a demo.&quot;  <br>[1] pry(main)* end  <br>=&gt; &quot;This is a demo.&quot;<br>[2] pry(main)&gt; cd Demo<br>[3] pry(Demo):1&gt; ls<br>instance variables: @a<br>locals: _  __  _dir_  _ex_  _file_  _in_  _out_  _pry_<br>[4] pry(Demo):1&gt; @a<br>=&gt; &quot;This is a demo.&quot;<br>[5] pry(Demo):1&gt; @a = &quot;Hello, World&quot;<br>=&gt; &quot;Hello, World&quot;<br>[6] pry(Demo):1&gt; ls<br>instance variables: @a<br>locals: _  __  _dir_  _ex_  _file_  _in_  _out_  _pry_<br>[7] pry(Demo):1&gt; @a<br>=&gt; &quot;Hello, World&quot;<br>[8] pry(Demo):1&gt; cd ..<br>[9] pry(main)&gt;</pre><p>Note that at the end I used “cd ..” to move “up” out of the “Demo” class scope and back to the default scope.</p><p>The “show-method” command allows you to display the source code of a method:</p><pre>[1] pry(main)&gt; class Demo<br>[1] pry(main)*   def initialize<br>[1] pry(main)*     puts &quot;Hello, World!&quot;<br>[1] pry(main)*   end  <br>[1] pry(main)* end  <br>=&gt; :initialize<br>[2] pry(main)&gt; show-method initialize<br>Error: Cannot locate this method: initialize.<br>[3] pry(main)&gt; cd Demo<br>[4] pry(Demo):1&gt; show-method initialize</pre><pre>From: (pry) @ line 2:<br>Owner: Demo<br>Visibility: private<br>Number of lines: 3</pre><pre>def initialize<br>  puts &quot;Hello, World!&quot;<br>end<br>[5] pry(Demo):1&gt;</pre><p>Note that the first “show-method” invocation didn’t work because I wasn’t in the “Demo” scope.</p><p>If you install the “pry-doc” gem, you can see the C source code for built-in methods:</p><pre>[1] pry(main)&gt; show-method String#puts</pre><pre>From: io.c (C Method):<br>Owner: Kernel<br>Visibility: private<br>Number of lines: 8</pre><pre>static
VALUE<br>rb_f_puts(int argc, VALUE *argv, VALUE recv)<br>{<br>    if (recv == rb_stdout) {<br> return rb_io_puts(argc, argv, recv);<br>    }<br>    return rb_funcallv(rb_stdout, rb_intern(&quot;puts&quot;), argc, argv);<br>}<br>[2] pry(main)&gt;</pre><p>This is extremely useful when trying to debug the behavior of a built-in method.</p><h3>Adding Pry to an Existing Ruby Program</h3><p>If you enjoy games, especially text-based games, and you haven’t read about <a href="https://blog.sourcerer.io/legend-of-the-sourcerer-a-text-based-adventure-game-in-ruby-9220b385ca1e">Legend of the Sourcerer</a>, go do it now. Open it in a new window. I’ll wait.</p><p>Neat, huh? Since it’s command-driven, we can add a command “~” that will break out into a Pry session.</p><p>In LOTS, you can use Pry to examine current game variables and even edit the map. While useful for cheating, you could also refer to this handy console when trying to debug new features you add to the game.</p><p>Pry integration into LOTS is in the current version on GitHub. <a href="https://github.com/sourcerer-io/lots/commit/fb9cdf9e6770af17b309aafd2fe4379c08d873af">Here’s the commit where I integrated Pry</a>. But adding Pry to an existing project is as simple as adding:</p><pre>require &#39;pry&#39;</pre><p>Then put:</p><pre>binding.pry</pre><p>wherever you’d like to invoke Pry. When this line executes, the session will start immediately.</p><h3>Using Pry with Rails</h3><p>There is a <a href="https://github.com/rweng/pry-rails">gem which handles Pry integration into Rails</a> for you. If you’d rather not change your app and just want to use the Pry console with Rails, you can do so by running:</p><pre>pry -r ./config/environment</pre><p>This will start a Pry session with your Rails app loaded.
Navigate your application’s class structure with ease, with all of that Pry debugging goodness at your fingertips.</p><h3>Deep in the Ruby Mines</h3><p>I hope you’ve enjoyed this look into debugging Ruby programs and gleaned some helpful hints from it. Armed with Ruby’s powerful metaprogramming and introspection, and the awesomely astounding Pry gem, your bugs don’t stand a chance.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=314ff869a611" width="1" height="1" alt=""><hr><p><a href="https://blog.sourcerer.io/turning-bugs-into-gems-debugging-ruby-applications-314ff869a611">Turning Bugs into Gems: Debugging Ruby Applications</a> was originally published in <a href="https://blog.sourcerer.io">Sourcerer Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Building web accessibility in 2019]]></title>
            <link>https://blog.sourcerer.io/building-web-accessibility-in-2019-b4bf16ef5754?source=rss----b33180f5facf---4</link>
            <guid isPermaLink="false">https://medium.com/p/b4bf16ef5754</guid>
            <category><![CDATA[coding]]></category>
            <category><![CDATA[accessibility]]></category>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[web-design]]></category>
            <category><![CDATA[web-development]]></category>
            <dc:creator><![CDATA[Alexander Surkov]]></dc:creator>
            <pubDate>Mon, 22 Oct 2018 00:01:14 GMT</pubDate>
            <atom:updated>2018-10-22T00:11:55.977Z</atom:updated>
<content:encoded><![CDATA[<h3>Building web accessibility for 2019</h3><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/1*mckEDgq4dmr4_zh2GNqKmw.jpeg" /><figcaption>[Today’s random Sourcerer profile: <a href="https://sourcerer.io/justin0022?utm_source=medium&amp;utm_medium=profilelink">https://sourcerer.io/justin0022</a>]</figcaption></figure><p>Today’s web content is amazingly rich. It varies from standard HTML to complex web apps full of media: animation, data visualization, video games, mixed reality and VR, to name a few. Such content is often inaccessible or poorly accessible, which means a broken user experience for those who rely on assistive technologies to interact with web sites. This experience can range from annoying at its best to empty web pages at its worst. Sometimes web authors even provide alternative content for these users. Needless to say, it’s not always equivalent. What alternative content is out there for a video game?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/416/0*y5gFH0xaZNIrkAPV" /><figcaption>HTML5 and standards beyond</figcaption></figure><p>Websites are built inaccessible not only because rich media content is highly dynamic, which makes it non-trivial to describe its semantics, but also because there’s no proper technology on the web to express such content semantics to assistive technologies. You may wonder why, with all the technologies out there to create web content, we do not have one to make it accessible. Here’s an explanation.</p><h3>WEB STANDARDS</h3><p>A key web technology for making content accessible is ARIA. ARIA stands for <a href="https://www.w3.org/WAI/standards-guidelines/aria/">Accessible Rich Internet Applications</a>. This is a web standard which defines a collection of attributes you can place on a DOM element to specify role, properties and state.
For example, &lt;div role=”button” aria-disabled=”true”&gt; will semantically turn the HTML div element into a disabled button, which makes assistive technology users perceive the div element as a button.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/468/0*XLCmpDACzSATscgD" /><figcaption>WAI-ARIA: developers — content — assistive tech — users</figcaption></figure><p>It’s fair to say that ARIA is generally recognized as the complete set of semantic annotations available to web developers. In other words, ARIA shapes the web author’s vocabulary when it comes to accessibility.</p><p>ARIA is a remarkable technology indeed, and it has a good success story. It has enabled a great number of websites to be accessible. There’s one important bit in this story though: the ARIA vocabulary largely replicates HTML semantics. For sure, it provides a bunch of extra cool things, such as grid or tree controls, but it doesn’t let you go very far beyond them.</p><p>So what if you have a use case that falls outside standard HTML? First, you can work on alternative content that can be mapped onto HTML. You may succeed or not, depending on the complexity of your case. And second, you can propose an ARIA extension. It’s not as bad as it may sound. There are good precedents, for example the <a href="https://www.w3.org/blog/2016/12/dpub-aria-1-0-is-released-as-a-candidate-recommendation/">ARIA-DPUB</a> extension. I know it is a slow process, but it works. You’d better carry some weight in the industry, though, to make it happen.</p><h3>PLATFORM APIs: GLANCE AT HISTORY</h3><p>The ARIA vocabulary is not the only problem. There’s another part of the puzzle: no accessibility API was ever designed to expose such diverse content to assistive technologies.</p><p>All accessibility APIs have their roots a couple of decades ago, when inaccessible JavaScript controls were a unique challenge to solve. Over time HTML matured and evolved into HTML5, acquiring a hefty number of new features.
This urged accessibility API vendors to adjust their APIs to make the new features accessible. Some efforts were also made to make graphical content accessible, such as <a href="https://css-tricks.com/accessible-svgs/">SVG</a> and <a href="https://developer.mozilla.org/en-US/docs/Web/API/Canvas_API/Tutorial/Hit_regions_and_accessibility">HTML canvas</a> drawings. However, most if not all of these initiatives were purely inspired by HTML-like use cases.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/681/0*vmBh1ZpVMLdRhe_j" /><figcaption>Accessibility APIs</figcaption></figure><p>In the case of non-HTML content, for example math, no breakthrough happened. It seems no API vendors made any decent moves to address the problem at large. Agreed, there were some notable exceptions, like the <a href="https://developer.apple.com/documentation/appkit/accessibility">OS X API</a>, which has extra properties for math, but you know the exception just proves the rule.</p><p>You could say this happens for a reason, such as a lack of great interest in such low-volume content — indeed, it takes a serious effort to push things forward, and there’s always a bigger fish to fry. But the fact remains: web authors’ expectations weren’t met. In consequence, web authors tried to reuse existing know-how to expose math content, which was mostly about generating human-readable annotations, something like “a plus b divided by zero”. Some screen readers started to parse MathML on their own in order to extract the semantics. Needless to say, these are hacky workarounds rather than neat solutions.</p><p>Admittedly, there has been good progress in making the web accessible over the last decade. However, the overall tendency was to bring existing expertise to new platforms and technologies.
This allowed traditionally accessible content to stay accessible, but it mostly left unaddressed all other kinds of content, both long-time acquaintances and newcomers that popped up several years ago and have sprouted throughout the modern web.</p><h3>ACCESSIBILITY CONCEPTS</h3><p>All accessibility APIs are built around similar, if not the same, concepts. These concepts allow authors to describe and identify blocks of content and connect the blocks to each other. The browser exposes these blocks to the assistive technologies via accessibility APIs, and the assistive technologies transform them into a format that the users may perceive, understand and operate with.</p><p>Here’s a short overview of accessibility concepts for the sake of providing context. You may scroll down to the end of this chapter if you feel bored :)</p><p>Role is a cornerstone concept in the semantic description of a block. It is a block’s primary characteristic and serves to identify the block’s type. For example, this is a button or this is a paragraph. It also scopes the set of possible properties and states and defines behavioral patterns. For example, this is a selected menu item, or this is a text field that can be focused. Or this is a grid cell; it has row and column indices, which help the user navigate the grid. A block may also have a human-readable label — the first thing the user will hear when focus lands on the block.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*Ca7dPHtiMhdQkZo1" /><figcaption>Accessibility concepts</figcaption></figure><p>Another key concept is relations, which describe connections between the blocks, i.e. how the blocks relate to each other. For example, most of the blocks are connected via parent-child relations, like a grid and its grid cells. Or an error message, shown if the user mistyped a password, is connected to the password field.
Relations allow the user to navigate back and forth between blocks, depending on the content type and its state, at the user’s discretion.</p><p>The last but not least concept is actions. It is a behavioral concept, used to describe the interactions the user may take on a block. If this is a button, for example, then you can click it, and so on.</p><p>Long story short, a block is described by its role, states and attributes, which allows the user to perceive the block. A block’s actions allow the user to interact with content, and relations are for navigation.</p><p>Also, web content may change over time, which means the blocks can be created, removed or altered, as well as the relations between them. All the changes are piped to the assistive technologies via the eventing mechanism of the accessibility APIs. This is not a fully separate accessibility concept, but it is an integral part worth mentioning for the sake of the further conversation.</p><h3>UNIVERSAL LANGUAGE</h3><p>You may find it surprising, but these concepts are universal enough to describe almost any kind of content you can see on the web today. Indeed, as long as you can describe the content in words, you can fit these verbal descriptions into accessibility concepts — all you need is to name objects, list their properties, define relations, and explain how to interact.</p><p>Need an example? Sure. Say you have a pie chart — this is something that falls outside standard HTML, and thus you typically have to use a bunch of tricks to make it accessible. However, the concepts are a perfect match here. Indeed, the pie itself is a role, and its title is a label. A pie chart consists of sectors; each of them also has a title, and all sectors are connected by relations defining which one comes next.
So the user can read a pie chart by navigating all the sectors one by one.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*gtCyRPFwG5RZLtnA" /><figcaption>Making a pie chart accessible</figcaption></figure><p>Or let’s consider a video game where a teddy bear teaches you some dancing. The bear shows dancing moves like the heel turn, heel pull, and dos-a-dos. Cameras keep track of your position and estimate how well you perform. So in terms of the concepts, the bear’s instructions are blocks with a limited life cycle, with labels, which are narrated to you. Limb positions are another set of blocks, characterized by a scalar value ranging from 0 to 100% and describing how close you were to the desired position. So, as the bear dances, you are told “heel turn, heel pull”, and as you move, you hear how perfect your movement was. You may also add a feedback factor here, which will serve to advise how your movements can be improved, something like “try leaning forward more next time”, etc.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*9aAqY4TouAqr9z__" /><figcaption>A Saturday night: VR game</figcaption></figure><p>You shouldn’t limit yourself to these examples. Thinking of starting an online sheet music store? You don’t want to leave anyone behind, for sure. You will need to make the music notes accessible, so assistive technology users can also enjoy them. Does your web app use chemistry ring formulas or math equations or anything else from any other subject you can think of? You can certainly describe all those symbols and formulas to a friend so that your friend understands them, right? Thus, you should be able to express them in accessibility concepts as well. You may want to practice on your own to see how it goes.</p><h3>SHAPE THE IDEA</h3><p>Let’s come back to earth.</p><p>The accessibility concepts were invented years ago, and surprisingly or not, they fit the modern web well.
However, web authors still struggle to make web apps accessible. How can that be, you wonder? What piece is missing? The answer is rather straightforward: web authors don’t have the right technology to describe web content, or, in web terms, there are no appropriate tag names, attributes or JavaScript hooks suitable for describing the <em>meaning</em> of the content. So let’s say you have created a web app that looks terrific and is quite useful, but your creativity was not limited to standard HTML. Chances are the machine has no idea what the app is about, and thus cannot successfully transform it into a format the assistive technology can digest.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/0*b2L3uJA-_w0_AKFh" /><figcaption>A mechanism: connect content providers and users</figcaption></figure><p>In this sense a web-facing technology to describe content is a crucial piece, but even if you had it, the irony is that the accessibility APIs couldn’t handle it anyway, because they weren’t designed to do so.</p><p>Let’s summarize this two-piece puzzle. The first piece is that the web author needs a technical vocabulary to describe content meaning, and the second is that platforms should have a delivery mechanism to convey these descriptions to the assistive technologies.</p><p>So what would a technology that cracks this puzzle look like?</p><p>Let’s think aloud. Say you have a cool web app, or you provide some unique content, and it is surely too special to fit into standard HTML. Of course you are able to explain it to your friends so they understand the whole thing — sure, how could they possibly use your app otherwise? Then you put that wording into the accessibility concepts, same as above, which also means breaking the content down into blocks in a way that can be processed by software.</p><p>The whole point of building blocks is that web authors need deep control over them — and this is exactly what today’s web is missing. 
Certainly, if authors create content, then their expressive possibilities for <em>explaining</em> the content should not be restricted by the technology.</p><p>Let’s imagine that web authors get a truly flexible tool that allows them to juggle content blocks the way they want, where the author is in charge of defining new roles, states and properties, and of connecting the blocks to each other by any relation you can imagine. Then all of this is piped to the assistive technologies via platform APIs (yep, the sole requirement on the APIs is probably being able to transfer this kind of data). And voilà, that’s all we need!</p><p>The content provider (a web author) knows the meaning of the content. The assistive technologies know how to present the content to the user, and the browser knows how to deliver it to the assistive technology.</p><p>In this scenario the browser serves as a bridge connecting the web author and the assistive technologies. The browser doesn’t necessarily have to understand the content semantics that flow through it. The browser acts as an agent that lets the author and the assistive technology communicate directly.</p><p>Sounds amazing? Yes! Sounds fanciful? No, but browser and API vendors will have to work hard to make it happen.</p><p>I believe the web needs a new flexible and extensible technology to give web authors control over the content they create. And it will change the web for good!</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=b4bf16ef5754" width="1" height="1" alt=""><hr><p><a href="https://blog.sourcerer.io/building-web-accessibility-in-2019-b4bf16ef5754">Building web accessibility in 2019</a> was originally published in <a href="https://blog.sourcerer.io">Sourcerer Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[A crash course on optimizing your Docker images for production]]></title>
            <link>https://blog.sourcerer.io/a-crash-course-on-optimizing-your-docker-images-for-production-46f175fdffa8?source=rss----b33180f5facf---4</link>
            <guid isPermaLink="false">https://medium.com/p/46f175fdffa8</guid>
            <category><![CDATA[startup]]></category>
            <category><![CDATA[software-development]]></category>
            <category><![CDATA[web-development]]></category>
            <category><![CDATA[devops]]></category>
            <category><![CDATA[docker]]></category>
            <dc:creator><![CDATA[Adnan Rahić]]></dc:creator>
            <pubDate>Thu, 18 Oct 2018 19:02:23 GMT</pubDate>
            <atom:updated>2018-10-18T19:02:22.468Z</atom:updated>
            <content:encoded><![CDATA[<figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*Iruw9M5H-MmWOP4Nwvyz_w.jpeg" /><figcaption>[Today’s random Sourcerer profile: <a href="https://sourcerer.io/fabiospampinato?utm_source=medium&amp;utm_medium=profilelink">https://sourcerer.io/fabiospampinato</a>]</figcaption></figure><p>Don’t you hate it when deploying your app takes ages? Over a gigabyte for a single container image isn’t exactly what most would call best practice. Pushing billions of bytes around every time you deploy a new version doesn’t sound quite right to me.</p><h3>TL;DR</h3><p>This article will show you a few simple steps for optimizing your Docker images, making them smaller, faster and better suited for production.</p><p>The goal is to show you the size and performance difference between using default Node.js images and their optimized counterparts. Here’s the agenda.</p><ul><li>Why Node.js?</li><li>Using the default Node.js image</li><li>Using the Node.js Alpine image</li><li>Excluding development dependencies</li><li>Using the base Alpine image</li><li>Using the builder pattern</li></ul><p>Let’s jump in.</p><h3>Why Node.js?</h3><p>Node.js is currently the most versatile and beginner-friendly environment to get started with on the back end, and I write it as my primary language, so you’ll have to put up with it. Sue me, right. 😙</p><p>As an interpreted language, JavaScript doesn’t have a compiled target, like Go for example. There’s not much you can do to strip the size of your Node.js images. Or is there?</p><p>I’m here to prove that wrong. 
Picking the right base image for the job, only installing production dependencies for your production image, and, of course, using the builder pattern are all ways you can drastically cut down the weight of your images.</p><p>In the examples below, I used a simple <a href="https://github.com/adnanrahic/boilerplate-api">Node.js API</a> I wrote a while back.</p><h3>Using the default Node.js image</h3><p>Starting out, of course, I used the default Node.js image, pulling it from <a href="https://hub.docker.com/">Docker Hub</a>. Oh, how clueless I was.</p><pre>FROM node<br>WORKDIR /usr/src/app<br>COPY package.json package-lock.json ./<br>RUN npm install<br>COPY . .<br>EXPOSE 3000<br>CMD [ &quot;node&quot;, &quot;app.js&quot; ]</pre><p>Want to guess the size? My jaw dropped. <strong>727MB</strong> for a simple API!?</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*3Gvi08t7bP-agD26yk2VLQ.png" /></figure><p>Don’t do this, please. You don’t need to do this, honestly, just don’t.</p><h3>Using the Node.js Alpine image</h3><p>The easiest and quickest way to drastically cut down the image size is to choose a much smaller base image. <a href="https://alpinelinux.org/">Alpine</a> is a tiny Linux distro that does the job. Just choosing the Alpine version of the Node.js image shows a huge improvement.</p><pre>FROM node:alpine <strong># adding the alpine tag</strong><br>WORKDIR /usr/src/app<br>COPY package.json package-lock.json ./<br>RUN npm install<br>COPY . .<br>EXPOSE 3000<br>CMD [ &quot;node&quot;, &quot;app.js&quot; ]</pre><p>A whole six times smaller! Down to <strong>123.1MB</strong>. That’s more like it.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*hLJYuKXsRHMdCtKUhk14nA.png" /></figure><h3>Excluding development dependencies</h3><p>Hmm… But there has to be something else we can do. Well, we are installing all dependencies, even though we only need production dependencies for the final image. 
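</p><p>One quick aside: since every Dockerfile here runs COPY . ., it’s worth adding a .dockerignore file so that a locally installed node_modules folder (and other cruft) never enters the build context in the first place. A minimal sketch, assuming a typical Node.js project layout:</p><pre>node_modules<br>npm-debug.log<br>.git</pre><p>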
How about we change that?</p><pre>FROM node:alpine<br>WORKDIR /usr/src/app<br>COPY package.json package-lock.json ./<br>RUN npm install --production <strong># Only install prod deps</strong><br>COPY . .<br>EXPOSE 3000<br>CMD [ &quot;node&quot;, &quot;app.js&quot; ]</pre><p>There we go. We shaved another 30MB off! Down to <strong>91.6MB</strong> now. We’re getting somewhere.</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*SktWUx5Y1y8O4PirwqSvFw.png" /></figure><p>This had me quite proud of myself, and I was ready to call it a day. But then it hit me. What if I started with the raw Alpine image? Maybe it would be smaller if I grabbed the base Alpine image and installed Node.js myself. I was right!</p><h3>Using the base Alpine image</h3><p>You’d think a move like this one would make little to no difference, but it shaved another 20MB off the previous version.</p><pre>FROM alpine <strong># base alpine</strong><br>WORKDIR /usr/src/app<br>RUN apk add --no-cache --update nodejs nodejs-npm <strong># install Node.js and npm</strong><br>COPY package.json package-lock.json ./<br>RUN npm install --production<br>COPY . .<br>EXPOSE 3000<br>CMD [ &quot;node&quot;, &quot;app.js&quot; ]</pre><p>Down to <strong>70.4MB</strong> now. That’s a whopping 10 times smaller than where we started!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*31zcUa1Qi34tnV-_3OMTVQ.png" /></figure><p>Not much more we can do now, right? Right…?</p><h3>Using the builder pattern</h3><p>Well, actually, there is. Let’s talk a bit about layers.</p><p>Every Docker image is built from layers. Each layer is a command in the Dockerfile. Here’s the file from above:</p><pre>FROM alpine # base alpine<br>WORKDIR /usr/src/app<br>RUN apk add --no-cache --update nodejs nodejs-npm # install Node.js and npm<br>COPY package.json package-lock.json ./<br>RUN npm install --production<br>COPY . 
.<br>EXPOSE 3000<br>CMD [ &quot;node&quot;, &quot;app.js&quot; ]</pre><p>The FROM instruction creates a layer, as do WORKDIR, RUN, and so on. All image layers are read-only; the writable layer is the thin container layer that Docker adds on top when a container runs. Read-only layers can be shared between containers, meaning one image can back many containers.</p><p>What’s going on here is that Docker uses storage drivers to manage the read-only layers and the writable container layer. The writable layer is ephemeral and gets deleted once the container is deleted. Really cool stuff. But why is this important?</p><p>By minimizing the number of layers, and what ends up inside them, we can have smaller images. This is where the builder pattern steps in.</p><pre>FROM alpine<strong> AS builder</strong><br>WORKDIR /usr/src/app<br>RUN apk add --no-cache --update nodejs nodejs-npm<br>COPY package.json package-lock.json ./<br>RUN npm install --production<br>​<br>#<br>​<br>FROM alpine<br>WORKDIR /usr/src/app<br>RUN apk add --no-cache --update nodejs<br>COPY <strong>--from=builder</strong> /usr/src/app/node_modules ./node_modules<br>COPY . .<br>EXPOSE 3000<br>CMD [ &quot;node&quot;, &quot;app.js&quot; ]</pre><p>We’re using the first image only to install the dependencies; then, in our final image, we copy over node_modules without building or installing anything. We can even skip installing <strong>npm</strong> in the final image!</p><p>Want to guess the final size? Go ahead!</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/1024/1*W-3Dyv-HSeBvZVoXfoySFw.png" /></figure><p>I’d say we’ve done well. Getting it down to <strong>48.6MB</strong>, a<strong> 15x </strong>improvement, is something to be proud of.</p><p>The engineering team over at <a href="https://sourcerer.io/">Sourcerer</a> is using this approach as well. 
They’re leveraging the builder pattern and creating several build steps that include running tests, routine automation and bundling the app core for production.</p><p>The whole CI/CD process is tied together with <a href="https://jenkins.io/">Jenkins</a> to automatically deliver new versions to production and development environments from specific Git branches.</p><h3>The verdict</h3><p>Don’t be naive: there’s absolutely no reason to have gigabyte-sized images in production. A great first step is to use a tiny base image. Start small; baby steps are fine.</p><p>Choosing optimized base images will get you a long way. If you really need a boost in deployment speed and are plagued with slow CI/CD pipelines, check out the <a href="https://docs.docker.com/develop/develop-images/multistage-build/">builder pattern</a>. You won’t want to do it any other way in the future.</p><p><strong><em>Note</em></strong><em>: I did leave out a sample where development dependencies are included for running tests before deploying to production, as it wasn’t relevant to the final size reduction for running in production. Of course, it’s a valid use-case! Feel free to add your ideas in the comments below. I’d love to hear what you think!</em></p><p>If you want to check out any of my previous DevOps-related articles about Docker and Kubernetes, feel free to head over to <a href="https://blog.sourcerer.io/@adnanrahic">my profile</a>.</p><p><a href="https://blog.sourcerer.io/@adnanrahic">Adnan Rahić - Sourcerer Blog</a></p><p><em>Hope you guys and girls enjoyed reading this as much as I enjoyed writing it.</em> <br><em>Do you think this tutorial will be of help to someone? Do not hesitate to share. If you liked it, smash the </em><strong><em>clap</em></strong><em> below so other people will see this here on Medium. 
Don’t forget to show us some love by </em><strong><em>following the Sourcerer blog!</em></strong></p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=46f175fdffa8" width="1" height="1" alt=""><hr><p><a href="https://blog.sourcerer.io/a-crash-course-on-optimizing-your-docker-images-for-production-46f175fdffa8">A crash course on optimizing your Docker images for production</a> was originally published in <a href="https://blog.sourcerer.io">Sourcerer Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
        <item>
            <title><![CDATA[Featured GitHub Repository — MS-DOS]]></title>
            <link>https://blog.sourcerer.io/featured-github-repository-ms-dos-5ecf27cacb38?source=rss----b33180f5facf---4</link>
            <guid isPermaLink="false">https://medium.com/p/5ecf27cacb38</guid>
            <category><![CDATA[programming]]></category>
            <category><![CDATA[open-source]]></category>
            <category><![CDATA[operating-systems]]></category>
            <category><![CDATA[microsoft]]></category>
            <category><![CDATA[coding]]></category>
            <dc:creator><![CDATA[Robert W. Oliver II]]></dc:creator>
            <pubDate>Sun, 14 Oct 2018 23:36:53 GMT</pubDate>
            <atom:updated>2018-10-15T00:16:54.075Z</atom:updated>
            <content:encoded><![CDATA[<h3>Walking through MS-DOS, the latest featured repository on GitHub</h3><p>by Robert W. Oliver II</p><figure><img alt="" src="https://cdn-images-1.medium.com/max/600/0*1v9oU4dmL4q3uFHJ.png" /><figcaption>This week’s featured GitHub repository is MS-DOS, published by Microsoft in 1981</figcaption></figure><h3>Pure Retro Joy</h3><p>When I learned that <a href="https://github.com/Microsoft/MS-DOS/">Microsoft’s MS-DOS source</a> code was one of the most popular repositories on GitHub, I was nothing short of ecstatic.</p><p>If you’ve followed any of my blog posts here, you’ll know I’m quite a fan of retro computing. I’ve covered various topics like <a href="https://blog.sourcerer.io/640k-really-is-enough-for-anyone-314f393ca5b8">MS-DOS video game development</a> and <a href="https://blog.sourcerer.io/retrokern-a-modern-thin-retro-assembly-platform-d51d8b89b6c1">put together a simple 16-bit operating system named Retrokern</a>. When talking with other developers who are retro-enthusiasts, we often speak with passion about the challenge of creating fully-functional software and games in the most confining of spaces. It was this intense scarcity of hardware resources that encouraged ingenuity and spawned an entire golden era of PC computing.</p><p>In 2014, Microsoft released the source code to MS-DOS 1.25 and 2.0. These versions, released in 1981 and 1983 respectively, were licensed to IBM (branded as PC-DOS) as well as to IBM-PC clone manufacturers and OEMs (original equipment manufacturers). In the version 1.x and 2.x era, releases of DOS were tailored to the manufacturer due to dramatic variations in BIOS compatibility. Thanks to the IO.SYS (sometimes named IBMBIO.COM) hardware abstraction, Microsoft needed only to modify this relatively thin layer to allow their operating system to execute on a wide variety of 8086-based machines.</p><h3>Exploring the Source</h3><p>In our trip down 640k memory lane, we’ll be exploring the 2.0 version. 
Not only is it the most recent source listed, but it also bears a stronger resemblance to later DOS versions.</p><p>Before we get into specific segments, let’s discuss the programming language of MS-DOS 2.0 — x86 assembly. Nearly all systems-level and game programming in the early DOS days was done in assembly. For an operating system, some assembly is required, especially in the boot loader. Since CPU cycles and bytes of memory were incredibly expensive in the ’80s, the entire MS-DOS kernel and associated utilities were written in assembly.</p><h3>DOS Boot</h3><p>When a BIOS-enabled PC (pretty much any PC made in the ’80s, ’90s, and 2000s) boots, the BIOS runs various self-checks and sets up its own interrupt vectors (function call tables for software interrupts), then loads the first sector of the boot drive and transfers control to it. The operating system has a very small window of code available to load the rest of the system from disk and transfer execution to it. This is handled primarily in the SYSINIT.ASM file.</p><p>In the early days of DOS, this feat was not overly challenging, but as hardware and filesystems became more complex, it was clear the PC world needed a better boot loader. Thus, the UEFI (Unified Extensible Firmware Interface) standard was born, used by both Macs and PCs. Despite its support for huge hard drives, x86_64 processors, and hardware never dreamed of in the 1980s, a UEFI system typically includes a legacy BIOS layer emulating the same bootstrapping behavior that an operating system like MS-DOS 2.0 would expect.</p><h3>The MS-DOS Kernel</h3><p>The hardware abstraction and input/output layer, called IO.SYS, paired with the MS-DOS kernel, MSDOS.SYS, makes up the core of the system. Many of the files in the source tree combine to produce these two files. 
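</p><p>For a taste of how programs called into this kernel (a tiny hand-written sketch of my own, not code from the repository), here is a complete .COM program that prints a string through the DOS interrupt, int 21h:</p><pre>        org 100h          ; .COM programs load at offset 100h<br>start:  mov ah, 09h       ; DOS function 09h: print $-terminated string<br>        mov dx, msg       ; DS:DX points at the string<br>        int 21h           ; call into MSDOS.SYS<br>        mov ah, 4Ch       ; DOS function 4Ch: terminate program<br>        int 21h<br>msg     db 'Hello from DOS!$'</pre><p>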
These files provide various routines that are available via interrupts, namely int 0x21 (also referred to as 21h, the “h” standing for hex).</p><p>This software interrupt allows DOS programs to allocate memory in a “safe” way (I use quotes because DOS memory allocation was far from perfect), access the filesystem, and display text on the screen. These function calls are far more portable than BIOS calls, since BIOS implementations were rarely entirely compatible. DOS kernel functions provided a safe way to accomplish most of a program’s needs.</p><p>When speed was critical, however, programs would often bypass DOS and use the hardware directly. This was especially true in the case of text display. Command line utilities were fine with function 0x09 of int 0x21 to print text (terminated by a $ sign), but complex applications or games would often write to the screen directly at segment 0xB800.</p><h3>The Command Interpreter</h3><p>COMMAND.ASM provides the source code for the part of DOS most users interacted with — the command interpreter. It was often known by its binary name, COMMAND.COM. Once the kernel was loaded, the command interpreter would be started. The program was unusual in that it had three distinct parts: the init, transient, and resident sections.</p><p>The init portion of COMMAND.COM loaded the other portions, processed the AUTOEXEC.BAT file (a script of startup commands launched at each boot), then transferred control to the transient portion.</p><p>The transient portion handled the user input loop. It would display the command prompt, wait for user input, then process those commands. The transient portion remained in memory during program execution but was considered volatile. Programs that used considerable amounts of memory would overwrite this section. It wasn’t needed during program execution, so this was perfectly acceptable.</p><p>The resident portion was always present. 
It included the code necessary for starting and terminating programs, as well as handling their exceptions (including the CTRL+C user-generated exception). It also included code that would reload the transient portion from the COMMAND.COM file.</p><p>In later DOS versions, the command interpreter could be replaced with another program. Popular alternatives were 4DOS, released by JP Software, and NDOS by Norton. These alternatives provided additional functionality and extra quality-of-life features for the DOS user.</p><h3>DOS Utilities</h3><p>MS-DOS was more than just a boot loader and command interpreter — it shipped with various utilities for formatting disks, managing files, and debugging software. Let’s explore several interesting programs that were included in MS-DOS 2.0.</p><h3>COPY</h3><p>Interestingly enough, the copy utility, used for copying files from one location to another, wasn’t a separate program but rather part of the transient portion of the command interpreter. If a program wanted to copy a file, it either had to do it by itself or spawn another command interpreter and run its copy command for that purpose.</p><p>The source code for copy can be found in v2.0/source/COPY.ASM. This is included in the larger COMMAND.ASM file. As with most assembly code, the various routines inside the copy function are split apart by labels (specified by a name with a trailing colon). These labels were translated into JMP (jump) addresses by the assembler.</p><p>An interesting artifact lies on line 120:</p><pre>mov [MELCOPY],al ; Not a Mel Hallerman copy</pre><p>Mel Hallerman was an IBM employee who is credited with writing some of the utilities included in MS-DOS 2.0. 
Unfortunately, I could find no code documentation or external reference source that provided a clue as to why the routine was given this name.</p><h3>DIR</h3><p>The file allocation table (FAT) was an on-disk structure that recorded the locations of files and subdirectories. Rather than make users browse this binary structure, the “dir” utility embedded in COMMAND.COM interpreted it and displayed it in an easy-to-digest fashion.</p><p>The source code for this command can be found in DIR.ASM. Like COPY.ASM, it is included in the larger COMMAND.ASM file and assembled into the transient portion of the command interpreter.</p><h3>CHKDSK</h3><p>MS-DOS 2.0 introduced directories. Files no longer had to reside in the root directory, and users could create a file and folder system to suit their needs. While this was a great achievement for Microsoft, the directory system was not perfect. When it failed, and it sometimes did during power outages and hard lockups, CHKDSK was usually the first rescue tool deployed.</p><p>The code for CHKDSK is, appropriately enough, found in CHKDSK.ASM. It’s an impressive piece of code that can rescue data in many cases of a corrupted FAT.</p><p>Oddly enough, until DOS 3.x, the DIR command didn’t show the user how much free disk space remained on the disk. The CHKDSK utility was the most common way to retrieve this information.</p><h3>EDLIN</h3><p>MS-DOS 5 and later included EDIT, a menu-driven text-based editor that was simple to use. But before this, MS-DOS users often relied on EDLIN.</p><p>Edlin was a line editor written by Tim Paterson for 86-DOS, the precursor to MS-DOS 1.0. I specify line editor because, rather than accepting free-form text like most other editors, the user could only input one line at a time. To display already written text, the user could use the “L” command, or prefix it with numbers to indicate the lines to display. 
Inserting text among what was already entered was a chore — users needed to relist the text and use the “I” command (preceded by the line number) to begin inserting text.</p><p>While Edlin wasn’t known for its intuitive interface, it was quite powerful in certain circumstances, providing a great search-and-replace feature and enabling users to delete multiple lines with just a few keystrokes.</p><h3>DEBUG</h3><p>Unless you purchased an assembler or compiler, writing software for DOS was difficult. Since no significant development utilities were included in the base package, amateur programmers were faced with a chicken-and-egg problem. Without a way to encode assembly commands into the binary code that the x86 processor uses, they were stuck with writing batch files in EDLIN.</p><p>I personally experienced this on my first DOS computer. I was a teenager with no job, so shelling out money for development software was not an option. Until I could acquire better tools, I often wrote assembly language programs in DEBUG.</p><p>The process of entering programs into DEBUG was tedious, but, true to its name, it allowed for instant debugging of said code. DEBUG was only able to write .COM files, meaning my programs couldn’t be larger than 64k, but that was more than enough to suit my programming needs until I later acquired more professional tools.</p><p>DEBUG was also excellent for disassembling and patching code. If the file was a .COM program, you could patch problematic code and write the file back to disk, providing a primitive yet effective hex editor.</p><p>The source code for DEBUG.COM can be found in DEBUG.ASM.</p><h3>Further Exploration</h3><p>The code is adequately documented, but for some commands, like FORMAT, a text file is provided with an in-depth discussion of the program’s operation and source code structure. 
I found these files quite useful in my exploration of this ancient treasure.</p><h3>The Future of DOS</h3><p>Believe it or not, <a href="https://www.kickstarter.com/projects/1973096722/planet-x3-for-ms-dos">new DOS software is still being written</a>. And across the interwebs you’ll find reports of DOS computers still being used in various commercial applications. <a href="http://www.slate.com/blogs/future_tense/2014/05/14/george_r_r_martin_writes_on_dos_based_wordstar_4_0_software_from_the_1980s.html">George R.R. Martin uses WordStar</a>, a DOS-based word processor, for his Game of Thrones book series. YouTuber and Linux personality Brian Lunduke <a href="https://www.youtube.com/watch?v=lSMpGkmj6MY">underwent a 30-day 1989 computing challenge</a> and, last I heard, still uses a DOS spreadsheet program. A good friend of mine uses the distraction-free environment of DOS to get some serious work done.</p><p>I won’t pretend that DOS is the operating system of the future. That would be, at best, incredibly naïve. But the fact that this nearly forty-year-old operating system is still kicking in various incarnations, including the popular <a href="http://freedos.org/">FreeDOS clone</a>, is astounding.</p><p>I encourage you to marinate on this thought for a moment — we still boot computers and write software for an operating system written nearly four decades ago when Jimmy Carter was president.</p><img src="https://medium.com/_/stat?event=post.clientViewed&referrerSource=full_rss&postId=5ecf27cacb38" width="1" height="1" alt=""><hr><p><a href="https://blog.sourcerer.io/featured-github-repository-ms-dos-5ecf27cacb38">Featured GitHub Repository — MS-DOS</a> was originally published in <a href="https://blog.sourcerer.io">Sourcerer Blog</a> on Medium, where people are continuing the conversation by highlighting and responding to this story.</p>]]></content:encoded>
        </item>
    </channel>
</rss>