<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0">
    <channel xmlns:g="http://base.google.com/ns/1.0">
        <title>RichardSwinbank.net</title>
        <link>https://richardswinbank.net/</link>
        <description>RichardSwinbank.net</description>
        <image>
            <url>https://richardswinbank.net/lib/tpl/rsnet-202002/images/favicon.ico</url>
            <title>RichardSwinbank.net</title>
            <link>https://richardswinbank.net/</link>
        </image>

        <item>
            <title>com.microsoft.cdm for Synapse Spark v3.4</title>
            <pubDate>Fri, 04 Apr 2025 16:15:00 +0000</pubDate>
            <link>https://richardswinbank.net/synapse/com_microsoft_cdm_for_synapse_spark</link>
            <description><![CDATA[The Spark connector for Synapse Link for Dataverse quietly disappeared in Synapse Spark runtime v3.4. This article looks at a possible workaround.]]></description>
            <category>synapse</category>                
            <category>spark</category>                
            <category>cdm</category>                
            <category>dataverse</category>                
        </item>  
        
        <item>
            <title>Deploy TMDL semantic models to Power BI</title>
            <pubDate>Fri, 28 Mar 2025 18:00:00 +0000</pubDate>
            <link>https://richardswinbank.net/pbi/deploy_tmdl_semantic_models_to_power_bi</link>
            <description><![CDATA[Deploy TMDL semantic models to Power BI using Azure DevOps CI/CD pipelines!]]></description>
            <category>powerbi</category>                
            <category>tmdl</category>                
            <category>tmsl</category>                
            <category>devops</category>                
            <category>dataops</category>                
            <category>cicd</category>                
            <category>devex</category>                
            <category>blog</category>                
        </item>  
        
        <item>
            <title>Unreliable logging in data factory pipelines</title>
            <pubDate>Mon, 03 Mar 2025 08:49:00 +0000</pubDate>
            <link>https://richardswinbank.net/fabric/unreliable_logging_in_data_factory_pipelines</link>
            <description><![CDATA[In this post I take a look at discrepancies between data factory activity execution details - in Fabric, Synapse or ADF - and what gets reported in activity logs.]]></description>
            <category>blog</category>                
            <category>fabric</category>                
            <category>synapse</category>                
            <category>adf</category>                
            <category>pipelines</category>                
        </item>  

        <item>
            <title>Two-aggregate pivot</title>
            <pubDate>Tue, 10 Sep 2024 21:54:00 +0000</pubDate>
            <link>https://richardswinbank.net/spark/two-aggregate-pivot</link>
            <description><![CDATA[In September 2024's T-SQL Tuesday, Deepthi Goguri wants to hear about a recent technical issue you resolved. I recently ran into an issue with a client where a Spark Structured Streaming query I was developing just refused to run, and I'll talk about that here – thanks for hosting, Deepthi!]]></description>
            <category>blog</category>                
            <category>tsqltuesday</category>                
            <category>spark</category>                
            <category>streaming</category>                
            <category>databricks</category>                
            <category>synapse</category>                
            <category>fabric</category>                
        </item>  

        <item>
            <title>DevOps for SQL databases</title>
            <pubDate>Tue, 13 Aug 2024 19:55:00 +0000</pubDate>
            <link>https://richardswinbank.net/blog/sql-database-devops</link>
            <description><![CDATA[In August 2024's T-SQL Tuesday, Mala Mahadevan asks us to reflect on how we manage database code. Thanks for hosting, Mala!]]></description>
            <category>blog</category>                
            <category>tsqltuesday</category>                
            <category>tsql</category>                
            <category>devops</category>                
            <category>dataops</category>                
        </item>  

        <item>
            <title>Create Power BI deployment pipelines automatically</title>
            <pubDate>Mon, 17 Jul 2023 22:00:00 +0000</pubDate>
            <link>https://richardswinbank.net/pbi/create_power_bi_deployment_pipelines_automatically</link>
            <description><![CDATA[In this post, I accelerate creation of new reports by building their deployment pipelines automatically. Create your report, push it to Git, see it appear in Power BI!]]></description>
            <category>powerbi</category>                
            <category>devops</category>                
            <category>dataops</category>                
            <category>cicd</category>                
            <category>devex</category>                
            <category>blog</category>                
        </item>  

        <item>
            <title>Can you just...?</title>
            <pubDate>Wed, 21 Jun 2023 15:30:00 +0000</pubDate>
            <link>https://richardswinbank.net/blog/can_you_just</link>
            <description><![CDATA[Communicating technical complexity to non-technical colleagues can be tough -- but it's essential if you want to explain why you're going to need 6 months to build a report 😂. This post tries to help with a one-slide explainer.]]></description>
            <category>blog</category>                
            <category>data_engineering</category>                
            <category>powerbi</category>                
        </item>  

        <item>
            <title>Working with multiple Power BI dataset environments</title>
            <pubDate>Tue, 06 Jun 2023 09:37:00 +0000</pubDate>
            <link>https://richardswinbank.net/pbi/working_with_multiple_power_bi_dataset_environments</link>
            <description><![CDATA[In a recent article I talked about deploying Power BI reports through any number of environments (not just three, and automatically!). In this new post I look at how to do exactly the same thing for standalone/shared PBI datasets – and how to bind thin reports to different source datasets automatically during deployment.]]></description>
            <category>powerbi</category>                
            <category>tmsl</category>                
            <category>devops</category>                
            <category>dataops</category>                
            <category>cicd</category>                
            <category>devex</category>                
            <category>blog</category>                
        </item>  

        <item>
            <title>Reusable deployment pipelines for Power BI</title>
            <pubDate>Tue, 23 May 2023 07:25:00 +0000</pubDate>
            <link>https://richardswinbank.net/pbi/reusable_deployment_pipelines_for_power_bi</link>
            <description><![CDATA[Azure DevOps pipeline templates allow you to define common functionality once, then reuse it in many pipelines. A great use case for this is publishing #PowerBI reports automatically -- templated deployment makes it easy to create a pipeline per report, giving you low overhead deployment with fine-grained control.]]></description>
            <category>powerbi</category>                
            <category>devops</category>                
            <category>dataops</category>                
            <category>cicd</category>                
            <category>devex</category>                
            <category>blog</category>                
        </item>  

        <item>
            <title>Publish automatically to Power BI environments with Azure DevOps pipelines</title>
            <pubDate>Tue, 09 May 2023 09:45:00 +0000</pubDate>
            <link>https://richardswinbank.net/pbi/publish_to_power_bi_environments_with_azure_devops_pipelines</link>
            <description><![CDATA[This post shows you how to use Azure DevOps pipelines to automate publishing of Power BI reports to different environment workspaces.]]></description>
            <category>powerbi</category>                
            <category>devops</category>                
            <category>dataops</category>                
            <category>cicd</category>                
            <category>devex</category>                
            <category>blog</category>                
        </item>  

        <item>
            <title>Better version control for Power BI datasets</title>
            <pubDate>Wed, 26 Apr 2023 08:32:00 +0000</pubDate>
            <link>https://richardswinbank.net/pbi/better_version_control_for_power_bi_datasets</link>
            <description><![CDATA[Version controlling a Power BI PBIX file along with all its data can be problematic, for reasons of both size and privacy. In this post I look at managing Power BI datasets separately, without data, and deploying them using Azure DevOps pipelines.]]></description>
            <category>powerbi</category>                
            <category>tmdl</category>                
            <category>devex</category>                
            <category>devops</category>                
            <category>dataops</category>                
            <category>cicd</category>                
            <category>blog</category>                
        </item>  

        <item>
            <title>Pro DevEx for Power BI</title>
            <pubDate>Wed, 12 Apr 2023 08:45:00 +0000</pubDate>
            <link>https://richardswinbank.net/pbi/pro_devex_for_powerbi</link>
            <description><![CDATA[After deploying an obsolete version of a buggy #PowerBI report to the wrong workspace, I realised it wasn't the report I meant anyway 😒. Time to invite #DevOps to this party.]]></description>
            <category>powerbi</category>                
            <category>devex</category>                
            <category>devops</category>                
            <category>dataops</category>                
            <category>cicd</category>                
            <category>blog</category>                
        </item>  
        
        <item>
            <title>Metadata-driven SQL code generation</title>
            <pubDate>Tue, 11 Oct 2022 07:40:00 +0000</pubDate>
            <link>https://richardswinbank.net/blog/metadata-driven-sql-code-generation</link>
            <description><![CDATA[October 2022's T-SQL Tuesday is hosted by Steve Jones – thanks Steve 😊. Steve's invitation this month is to write about using dynamic SQL – that is, T-SQL statements built up from text strings and then executed as code.]]></description>
            <category>blog</category>                
            <category>tsqltuesday</category>                
            <category>tsql</category>                
            <category>dynamic</category>                
        </item>  

        <item>
            <title>Argument {0} is null or empty. Parameter name: paraKey</title>
            <pubDate>Tue, 12 Jul 2022 20:04:00 +0000</pubDate>
            <link>https://richardswinbank.net/adf/script_activity_parakey</link>
            <description><![CDATA[I encountered this error while using Azure Data Factory's Script activity -- with a bit of luck, writing it down will help me remember how to fix it next time!]]></description>
            <category>adf</category>
            <category>blog</category>                
        </item>  
        
        <item>
            <title>SQL database project (SSDT) merge conflicts</title>
            <pubDate>Wed, 09 Mar 2022 08:21:00 +0000</pubDate>
            <link>https://richardswinbank.net/ssdt/ssdt_merge_conflicts</link>
            <description><![CDATA[More merge conflict fun! This time, using pre-commit hooks to duck conflicts in SSDT .sqlproj files.]]></description>
            <category>sql</category>
            <category>ssdt</category>
            <category>git</category>
            <category>devops</category>
            <category>blog</category>                
        </item>  

        <item>
            <title>Merge conflicts in tabular models</title>
            <pubDate>Tue, 01 Feb 2022 08:40:00 +0000</pubDate>
            <link>https://richardswinbank.net/ssas/ssas_tabular_merge_conflicts</link>
            <description><![CDATA[I sometimes find working with Visual Studio's projects a challenge in multi-developer environments, because each project type seems to have its own vulnerability to Git merge conflicts. This post looks at how to avoid them when working with tabular models for Power BI Premium, AAS or SSAS.]]></description>
            <category>ssas</category>
            <category>tabular</category>
            <category>git</category>
            <category>devops</category>
            <category>blog</category>                
        </item>  

        <item>
            <title>Ordered STRING_SPLIT</title>
            <pubDate>Tue, 18 Jan 2022 08:34:00 +0000</pubDate>
            <link>https://richardswinbank.net/tsql/ordered_string_split</link>
            <description><![CDATA[I'm a bit late to this party, but some long-awaited news is finally here – STRING_SPLIT now returns the ordinal position of string elements in various Azure SQL offerings.]]></description>
            <category>blog</category>                
            <category>tsql</category>
        </item>  
        
        <item>
            <title>Breaking the rules</title>
            <pubDate>Tue, 11 Jan 2022 23:34:00 +0000</pubDate>
            <link>https://richardswinbank.net/blog/breaking_the_rules</link>
            <description><![CDATA[January 2022's T-SQL Tuesday is hosted by Andy Yun -- thanks Andy! This month, Andy's asking about learning that changes your opinion.]]></description>
            <category>blog</category>                
            <category>tsql</category>
            <category>tsqltuesday</category>
        </item>  
        <item>
            <title>Infrastructure as nearly-all-code</title>
            <pubDate>Tue, 14 Sep 2021 22:17:00 +0000</pubDate>
            <link>https://richardswinbank.net/blog/infrastructure_as_nearly_all_code</link>
            <description><![CDATA[T-SQL Tuesday #142 (September 2021) is hosted by Frank Geisler. Frank's choice of subject is "using descriptive techniques to build database environments".]]></description>
            <category>blog</category>                
            <category>tsqltuesday</category>
            <category>azure</category>
            <category>terraform</category>
        </item>  
        <item>
            <title>More Get Metadata in ADF</title>
            <pubDate>Tue, 23 Feb 2021 20:00:00 +0000</pubDate>
            <link>https://richardswinbank.net/adf/get_more_metadata</link>
            <description><![CDATA[Last year I wrote a post about doing this in pure ADF, with really terrible performance. By way of apology, I've had another go in an Azure Function!]]></description>
            <category>blog</category>                
            <category>adf</category>
            <category>metadata</category>
        </item>  
        <item>
            <title>Azure Data Factory, the ADF UX and Git</title>
            <pubDate>Tue, 10 Nov 2020 08:45:00 +0000</pubDate>
            <link>https://richardswinbank.net/adf/azure_data_factory_the_adf_ux_and_git</link>
            <description><![CDATA[If you're using Azure Data Factory (ADF), you're probably using Git and almost certainly using the ADF User Experience (ADF UX) – ADF's online integrated development environment (IDE). These three components are so closely interlinked that sometimes it's hard to think about them separately – in this article I try to do exactly that.]]></description>
            <category>blog</category>                
            <category>adf</category>
            <category>git</category>
            <category>adfux</category>
        </item>  

        <item>
            <title>Catch-22: Automating MSI access to an Azure SQL Database</title>
            <pubDate>Wed, 28 Oct 2020 08:45:00 +0000</pubDate>
            <link>https://richardswinbank.net/azure/catch_22_automating_msi_access_to_azure_sql_database</link>
            <description><![CDATA[A problem I ran into recently is how to automate granting access to SQL databases in different environments. I'm using Terraform to build parallel data engineering environments which – amongst other things – include one or more SQL databases and instances of Azure Data Factory (ADF). ADF pipelines need access to databases, and I want to authenticate linked service connections using the factory's managed identity (MSI). Building this automatically is harder than it sounds!]]></description>
            <category>blog</category>                
            <category>terraform</category>
            <category>adf</category>
            <category>msi</category>
            <category>azure</category>
            <category>devops</category>
        </item> 

        <item>
            <title>The Ice Cream Van of Abstraction</title>
            <pubDate>Tue, 13 Oct 2020 08:35:00 +0000</pubDate>
            <link>https://richardswinbank.net/blog/the_ice_cream_van_of_abstraction</link>
            <description><![CDATA[October 2020's <a href="http://tsqltuesday.com/">T-SQL Tuesday</a> is hosted by Rob Volk (<a href="https://sqlrblog.wordpress.com/">b</a>|<a href="https://twitter.com/sql_r">t</a>) with the subject <em>Data Analogies, or: Explain Databases Like I’m Five!</em> Thanks for hosting, Rob! It turns out that I don't do much specifically <em>databas</em>plaining™ by analogy, but there's one analogy I go back to again and again when talking to people unfamiliar with a useful concept, both inside and outside of SQL Server: <em>abstraction</em>.]]></description>
            <category>blog</category>                
            <category>tsqltuesday</category>
        </item> 

        <item>
            <title>Google Analytics API pagination in Azure Data Factory</title>
            <pubDate>Wed, 07 Oct 2020 08:45:00 +0000</pubDate>
            <link>https://richardswinbank.net/adf/google_analytics_api_pagination_in_azure_data_factory</link>
            <description><![CDATA[In a previous post I created a pipeline to retrieve data from the Google Analytics reporting API, using an OAuth 2.0 access token for authorisation. Azure Data Factory's Copy data activity handles various styles of paged API response, but it doesn't support the approach taken by the Google Analytics reporting API. In this post I look at how to make that work.]]></description>
            <category>blog</category>                
            <category>google_analytics</category>                
            <category>adf</category>
        </item>

        <item>
            <title>Get Metadata recursively in Azure Data Factory</title>
            <pubDate>Tue, 29 Sep 2020 08:45:00 +0000</pubDate>
            <link>https://richardswinbank.net/adf/get_metadata_recursively_in_azure_data_factory</link>
            <description><![CDATA[Azure Data Factory's Get Metadata activity returns metadata properties for a specified dataset, but not recursively. In this post I try to build an alternative using just ADF.]]></description>
            <category>blog</category>                
            <category>metadata</category>                
            <category>adf</category>
        </item>

        <item>
            <title>Extract data from Google Analytics with Azure Data Factory</title>
            <pubDate>Thu, 10 Sep 2020 09:10:00 +0000</pubDate>
            <link>https://richardswinbank.net/adf/extract_data_from_google_analytics_with_azure_data_factory</link>
            <description><![CDATA[In a <a href="https://richardswinbank.net/adf/access_google_analytics_with_azure_data_factory">previous post</a> I prepared an ADF pipeline to make authorised Google Analytics API requests, using an Azure Function to obtain an OAuth 2.0 access token. In this article I'm going to use the returned token to authorise an API connection and extract data from Google Analytics.]]></description>
            <category>blog</category>                
            <category>google_analytics</category>                
            <category>adf</category>
        </item>          

        <item>
            <title>Automate to replicate</title>
            <pubDate>Tue, 08 Sep 2020 13:55:00 +0000</pubDate>
            <link>https://richardswinbank.net/blog/automate_to_replicate</link>
            <description><![CDATA[
In September 2020's <a href="http://tsqltuesday.com/">T-SQL Tuesday</a>, Elizabeth Noble (<a href="https://sqlzelda.wordpress.com/">b</a>|<a href="https://twitter.com/sqlzelda">t</a>) wants to know what members of the SQL community have automated to make their lives easier. Thanks for asking, Elizabeth 😀.
]]></description>
            <category>blog</category>                
            <category>dynamic</category>                
            <category>tsqltuesday</category>
        </item>  
        
        <item>
            <title>Access Google Analytics with Azure Data Factory</title>
            <pubDate>Fri, 28 Aug 2020 08:45:00 +0000</pubDate>
            <link>https://richardswinbank.net/adf/access_google_analytics_with_azure_data_factory</link>
            <description><![CDATA[At the time of writing, Azure Data Factory has no connector to enable data extraction from Google Analytics, but it seems to be a <a href="https://feedback.azure.com/forums/270578-data-factory/suggestions/36151204-azure-data-factory-google-analytics-connector">common requirement</a> – here's how to do it using ADF's current feature set.]]></description>
            <category>blog</category>                
            <category>google_analytics</category>                
            <category>adf</category>
        </item>  
        
        <item>
            <title>Parameterising the Execute Pipeline activity</title>
            <pubDate>Thu, 13 Aug 2020 08:45:00 +0000</pubDate>
            <link>https://richardswinbank.net/adf/parameterising_the_execute_pipeline_activity</link>
            <description><![CDATA[A shortcoming of Azure Data Factory's Execute Pipeline activity is that the pipeline to be triggered must be hard-coded into the activity – so it's impossible to use metadata-driven approaches like iterating over a list of pipeline names. This post looks at an alternative approach.]]></description>
            <category>blog</category>                
            <category>adf</category>
        </item>  
        
        <item>
            <title>Time capsule</title>
            <pubDate>Tue, 11 Aug 2020 11:00:00 +0000</pubDate>
            <link>https://richardswinbank.net/blog/time_capsule</link>
            <description><![CDATA[<p>
<a href="http://tsqltuesday.com/">T-SQL Tuesday</a> for August 2020 is hosted by Tamera Clark (<a href="https://clarkcreations.net/blog/t-sql-tuesday/">b</a>|<a href="https://twitter.com/tameraclark">t</a>). She's asking for help in assembling a #SQLCommunity time capsule -- thanks for hosting, Tamera!
</p>]]></description>
            <category>blog</category>                
            <category>tsqltuesday</category>
        </item>          

        <item>
            <title>Why automate ADF pipeline testing?</title>
            <pubDate>Fri, 24 Jul 2020 10:00:00 +0000</pubDate>
            <link>https://richardswinbank.net/adf/why_automate_adf_pipeline_testing</link>
            <description><![CDATA[<p>
Since writing my <a href="https://richardswinbank.net/adf/set_up_automated_testing_for_azure_data_factory">series</a> on automated testing Azure Data Factory pipelines, I've had a few questions along the lines of "why bother?". One reader commented that "a typical ADF developer tests their pipeline doing debug runs". This is exactly how I develop pipelines: make a change, run the change, repeat until the pipeline does what I want. "Why bother with more testing?" is a good question!
</p>]]></description>
            <category>blog</category>                
            <category>adf</category>
            <category>adf_testing</category>
            <category>devops</category>
        </item>  

        <item>
            <title>Default fault</title>
            <pubDate>Tue, 14 Jul 2020 08:30:00 +0000</pubDate>
            <link>https://richardswinbank.net/admin/default_fault</link>
            <description><![CDATA[<p>
July 2020's <a href="http://tsqltuesday.com/">T-SQL Tuesday</a> is hosted by Kerry Tyler (<a href="https://www.airbornegeek.com/2020/07/t-sql-tuesday-128-learn-from-others/">b</a>|<a href="https://twitter.com/AirborneGeek">t</a>). Kerry describes pilots' use of plane crash reports to learn aviation safety lessons, and asks for similar – albeit hopefully less catastrophic – tales of SQL Server-related disaster.
</p>]]></description>
            <category>blog</category>                
            <category>tsqltuesday</category>
            <category>sql</category>
            <category>ssms</category>
            <category>admin</category>
        </item>  

        <item>
            <title>Dropping temporary tables</title>
            <pubDate>Thu, 09 Jul 2020 08:30:00 +0000</pubDate>
            <link>https://richardswinbank.net/tsql/dropping_temporary_tables</link>
            <description><![CDATA[<p>
Local <a href="https://docs.microsoft.com/en-us/sql/t-sql/statements/create-table-transact-sql#temporary-tables">temporary tables</a> – tables with names that begin with a single <code>#</code> character – are dropped automatically by SQL Server when they are no longer in scope. So why drop them explicitly at all? Here are some ideas!
</p>]]></description>
            <category>tsql</category>
            <category>dynamic</category>
            <category>development</category>
            <category>blog</category>                
        </item>  

        <item>
            <title>Print big</title>
            <pubDate>Tue, 30 Jun 2020 15:30:00 +0000</pubDate>
            <link>https://richardswinbank.net/tsql/print_big</link>
            <description><![CDATA[<p>
A feature of T-SQL is that strings longer than 8000 bytes <a href="https://docs.microsoft.com/en-us/sql/t-sql/language-elements/print-transact-sql">are truncated</a> by <code>PRINT</code>. If you haven't already discovered this, you might wonder why it's a problem – the answer (for me at least) is <em>dynamic SQL</em>.
</p>]]></description>
            <category>tsql</category>
            <category>blog</category>                
        </item>  

        <item>
            <title>From Azure Pipelines to GitHub Actions</title>
            <pubDate>Thu, 25 Jun 2020 08:30:00 +0000</pubDate>
            <link>https://richardswinbank.net/github/from_azure_pipelines_to_github_actions</link>
            <description><![CDATA[<p>
I use <a href="https://azure.microsoft.com/en-gb/services/devops/pipelines/">Azure Pipelines</a> a lot, both for CI/CD (including <a href="https://richardswinbank.net/adf/testing_azure_data_factory_in_your_cicd_pipeline">automated testing</a> for <a href="https://azure.microsoft.com/en-gb/services/data-factory/">Azure Data Factory</a>) and for building disposable Azure environments repeatably. Most of my code, however, lives in <a href="https://github.com/richardswinbank">GitHub</a>. GitHub provides its own service for automating CI/CD software workflows – <a href="https://help.github.com/en/actions/getting-started-with-github-actions/about-github-actions">GitHub Actions</a> – and in this post I compare it to Azure Pipelines.
</p>]]></description>
            <category>cicd</category>
            <category>devops</category>
            <category>azure</category>
            <category>github</category>
            <category>blog</category>                
        </item>  

        <item>
            <title>Calculating Azure Data Factory test coverage</title>
            <pubDate>Wed, 17 Jun 2020 08:30:00 +0000</pubDate>
            <link>https://richardswinbank.net/adf/calculating_azure_data_factory_test_coverage</link>
            <description><![CDATA[<p>This is the sixth and final article in a <a href="https://richardswinbank.net/adf/set_up_automated_testing_for_azure_data_factory">series</a> on automated testing for Azure Data Factory pipelines. In software engineering, <strong><a href="https://en.wikipedia.org/wiki/Code_coverage">code coverage</a></strong> (or <strong>test coverage</strong>) measures the proportion of a program's source code executed by a given test suite. In this article I use execution history data to measure the proportion of a data factory's activities (across all pipelines) executed during a full test run – the test suite's <strong>activity coverage</strong>.
</p>]]></description>
            <category>azure</category>
            <category>adf</category>
            <category>testing</category>
            <category>devops</category>
            <category>blog</category>         
        </item>        

        <item>
            <title>The under-appreciated Start → Run</title>
            <pubDate>Tue, 09 Jun 2020 13:00:00 +0000</pubDate>
            <link>https://richardswinbank.net/blog/the_underappreciated_start_run</link>
            <description><![CDATA[<p>
This month's <a href="http://tsqltuesday.com/">T-SQL Tuesday</a>, hosted by Kenneth Fisher (<a href="https://sqlstudies.com/2020/06/02/tsql-tuesday-127-invite-non-sql-tips-and-tricks/">b</a>|<a href="https://twitter.com/sqlstudent144">t</a>), asks for specifically <em>non-SQL</em> related tips and tricks. I found that much harder than any number of technical problems, which I guess is the point of T-SQL Tuesday... 
</p>]]></description>
            <category>blog</category>
            <category>tsqltuesday</category>
        </item>

        <item>
            <title>Unit testing Azure Data Factory pipelines</title>
            <pubDate>Wed, 03 Jun 2020 11:00:00 +0000</pubDate>
            <link>https://richardswinbank.net/adf/unit_testing_azure_data_factory_pipelines</link>
            <description><![CDATA[<p>
In <a href="https://richardswinbank.net/adf/isolated_functional_tests_for_azure_data_factory">part three</a> of this series I looked at functional tests for ADF pipelines: verifying, in isolation, that pipelines are “doing things right”. In this post I'll be testing isolated pipelines to check that they're “doing the right things” – this is <a href="https://richardswinbank.net/adf/set_up_automated_testing_for_azure_data_factory#what_kind_of_test_am_i_talking_about">one description of</a> a <strong>unit test</strong>. </p>]]></description>
            <category>azure</category>
            <category>adf</category>
            <category>testing</category>
            <category>devops</category>
            <category>blog</category>         
        </item>    

        <item>
            <title>Testing Azure Data Factory in your CI/CD pipeline</title>
            <pubDate>Thu, 21 May 2020 08:30:00 +0000</pubDate>
            <link>https://richardswinbank.net/adf/testing_azure_data_factory_in_your_cicd_pipeline</link>
            <description><![CDATA[<p>
In my <a href="https://richardswinbank.net/adf/isolated_functional_tests_for_azure_data_factory">previous post</a> I used ADF pipeline parameters to implement dependency injection for ADF pipelines and build isolated functional tests using the NUnit testing framework. In this article, I integrate the NUnit testing solution into an Azure DevOps pipeline, so that I can run tests automatically whenever changes are made to ADF resources.
</p>]]></description>
            <category>azure</category>
            <category>adf</category>
            <category>testing</category>
            <category>devops</category>
            <category>cicd</category>
            <category>blog</category>         
        </item>   
        <item>
            <title>A tale of two smartphone apps</title>
            <pubDate>Tue, 12 May 2020 10:30:00 +0000</pubDate>
            <link>https://richardswinbank.net/blog/a_tale_of_two_smartphone_apps</link>
            <description><![CDATA[<p>
This month's <a href="http://tsqltuesday.com/">T-SQL Tuesday</a> is hosted by Glenn Berry (<a href="https://glennsqlperformance.com/2020/05/05/t-sql-tuesday-126-foldinghome/">b</a>|<a href="https://twitter.com/GlennAlanBerry">t</a>). Glenn's invitation was “to write about what you have been doing as a response to COVID-19”, and it feels like my answer should be this: not much... 
</p>]]></description>
            <category>blog</category>
            <category>tsqltuesday</category>
        </item>        
        <item>
            <title>Isolated functional tests for Azure Data Factory</title>
            <pubDate>Wed, 06 May 2020 08:30:00 +0000</pubDate>
            <link>https://richardswinbank.net/adf/isolated_functional_tests_for_azure_data_factory</link>
            <description><![CDATA[<p>
In this article, I'll look at isolating a pipeline from its external dependencies in order to test it independently and with a range of testing scenarios. I'll be trying to establish that a pipeline is “doing things right” in isolation – this is a <strong><a href="https://richardswinbank.net/adf/set_up_automated_testing_for_azure_data_factory#what_kind_of_test_am_i_talking_about">functional test</a></strong>.
</p>]]></description>
            <category>azure</category>
            <category>adf</category>
            <category>testing</category>
        </item>
        <item>
            <title>Automate integration tests in Azure Data Factory</title>
            <pubDate>Sun, 26 Apr 2020 08:30:00 +0000</pubDate>
            <link>https://richardswinbank.net/adf/automate_integration_tests_in_azure_data_factory</link>
            <description><![CDATA[<p>
In my <a href="https://richardswinbank.net/adf/set_up_automated_testing_for_azure_data_factory">previous post</a>, I set up and ran one basic test of a single pipeline. In this article, I refactor my VS testing solution to make it easier to add new tests and to test new pipelines.
</p>]]></description>
            <category>azure</category>
            <category>adf</category>
            <category>testing</category>
        </item>
        <item>
            <title>Set up automated testing for Azure Data Factory</title>
            <pubDate>Mon, 20 Apr 2020 08:30:00 +0000</pubDate>
            <link>https://richardswinbank.net/adf/set_up_automated_testing_for_azure_data_factory</link>
            <description><![CDATA[<p>
Test automation allows you to run more tests, in less time, with guaranteed repeatability. If you change an existing ADF dataset definition for a new ADF pipeline, how do you know you haven't broken something else? Automatically re-testing all your ADF pipelines before deployment gives you some protection against regression faults. Automated testing is a key component of CI/CD software development approaches: including automated tests in CI/CD deployment pipelines for Azure Data Factory can significantly improve quality.
</p>]]></description>
            <category>azure</category>
            <category>adf</category>
            <category>testing</category>
            <category>devops</category>
            <category>cicd</category>
        </item>
        <item>
            <title>Errors in script tasks and components</title>
            <pubDate>Sun, 26 Jan 2020 08:30:00 +0000</pubDate>
            <link>https://richardswinbank.net/ssis/errors_in_script_tasks_and_components</link>
            <description><![CDATA[<p>
Unhandled errors thrown out of SSIS script tasks are usually accompanied by the message “Exception has been thrown by the target of an invocation” – which isn't very informative! 
</p><p>
A more manageable approach is to wrap as much of your code as possible inside a <code>try</code>/<code>catch</code>, then raise caught errors to SSIS for cleaner failure and easier diagnosis. This article shows you how to do that.
</p>]]></description>
            <category>ssis</category>
            <category>script_tasks</category>
        </item>
        <item>
            <title>Managing extended properties</title>
            <pubDate>Thu, 02 Jan 2020 08:30:00 +0000</pubDate>
            <link>https://richardswinbank.net/tsql/managing_extended_properties</link>
            <description><![CDATA[<p>
<a href="https://www.mssqltips.com/sqlservertip/5384/working-with-sql-server-extended-properties/">Extended properties</a> are a means of attaching user-defined key-value pairs to database objects. This page provides some utility scripts to facilitate interaction with extended properties. 
</p>]]></description>
            <category>tsql</category>
            <category>extended_properties</category>
        </item>
    </channel>
</rss>