Posted in SQL Server, T-SQL, T-SQL Tuesday

T-SQL Tuesday #195 – How Has Your Code Aged?

It’s another T-SQL Tuesday! Our host this month is Pat Wright (blog). Our challenge this month is to think about how our code has aged over time.

T-SQL Tuesday logo

This is an interesting question. In some ways, it’s hard to say how my code has aged. No one is yelling at me that my code is breaking production, so that’s a good sign. I have definitely talked about times when I wish I could go back and do something differently because it would have been a little smarter. I’m no longer at some of those companies, and one of them doesn’t even exist in that incarnation anymore, so I have no clue how that code is being used or if it was ever fully implemented. I’ve done a lot of reviews of other people’s code, and the most problematic legacy code was the code that would have been problematic at the time it was written but was good enough to get by.

Continue reading “T-SQL Tuesday #195 – How Has Your Code Aged?”
Posted in SQL Server, T-SQL, T-SQL Tuesday

T-SQL Tuesday #194 – Bad Mistakes, I’ve made a few…

Happy 2026! I’m jumping into the new year by participating in T-SQL Tuesday. Louis Davidson is hosting this month and you can see his full invite here. This month, he’s asking us to share a mistake we’ve made and the lessons we learned from it.

T-SQL Tuesday logo

Let’s be honest – there are just too many mistakes to talk about, which is why I chose the quote from the rock band Queen as my post’s title. Those of us who are speakers and bloggers probably already have at least one session and/or blog post about what we’ve learned from a past mistake – usually multiples of each.

Continue reading “T-SQL Tuesday #194 – Bad Mistakes, I’ve made a few…”
Posted in Database Development, SQL Server, T-SQL Tuesday

T-SQL Tuesday #192 – What I’m excited for with SQL Server 2025

T-SQL Tuesday logo

Happy T-SQL Tuesday! It feels like it’s been a while since I was able to put one of these posts together. So thank you to Steve Jones for this month’s topic, which makes it really easy to write a post and join the party. Steve asked us to talk about what we’re excited about in SQL Server 2025.

When I look at this release, I feel like I’ve been more tuned into what’s coming out than with any other. I’m still not sure how that happened. But I think overall it’s a good thing, because there really is a lot packed into this release. If you’re only just starting to figure it out now, you’re already behind.

Continue reading “T-SQL Tuesday #192 – What I’m excited for with SQL Server 2025”
Posted in SQL Server, T-SQL, T-SQL Tuesday

T-SQL Tuesday #186 – Managing SQL Agent Jobs

Happy T-SQL Tuesday! This month, Andy Levy (bsky | blog) has asked us: How do you manage and/or monitor your SQL Server Agent jobs? (Check out the full invite here.)

T-SQL Tuesday logo

When I first started as a DBA, I tended to think of SQL Agent jobs as an operational tool – you use them to back up your databases, run index maintenance, and so on. But I found myself using SQL Agent jobs for more and more purposes. I need to add an index to a large table as part of a deploy, but the installer connection string only allows a 30-second timeout and the index will take significantly longer to build? I can have my install script create a job that runs once during a downtime window to add that index. We need functionality that sends messages to Service Broker queues, which then have to be pulled down and processed, as a way to simulate real-time analytical cubes and dimensions? Create SQL Agent jobs as part of the deploy that set up and run the process if the system is configured to collect the data. I probably have a bunch of other examples like these. If your application relies on the database processing something on its own, agent jobs are a great option.
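That first deploy-time scenario can be sketched with the standard msdb stored procedures. This is a minimal, hypothetical example – the job name, database, table, and downtime date are all made up for illustration:

```sql
USE msdb;
GO

-- Hypothetical one-time job created by a deploy script: builds the index
-- during a maintenance window instead of inside the installer's 30-second
-- connection timeout.
EXEC dbo.sp_add_job
    @job_name     = N'Deploy_AddIndex_BigTable',  -- hypothetical name
    @delete_level = 1;                            -- delete the job once it succeeds

EXEC dbo.sp_add_jobstep
    @job_name      = N'Deploy_AddIndex_BigTable',
    @step_name     = N'Create index',
    @subsystem     = N'TSQL',
    @database_name = N'MyAppDb',                  -- hypothetical database
    @command       = N'CREATE INDEX IX_BigTable_Col1 ON dbo.BigTable (Col1);';

EXEC dbo.sp_add_jobschedule
    @job_name          = N'Deploy_AddIndex_BigTable',
    @name              = N'One time - downtime window',
    @freq_type         = 1,                       -- 1 = run once
    @active_start_date = 20250601,                -- hypothetical downtime date
    @active_start_time = 20000;                   -- 02:00:00

EXEC dbo.sp_add_jobserver
    @job_name = N'Deploy_AddIndex_BigTable';      -- target the local server
```

With `@delete_level = 1`, the job cleans itself up after a successful run, so nothing is left behind once the index exists.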

How do I manage this all?

Continue reading “T-SQL Tuesday #186 – Managing SQL Agent Jobs”
Posted in Blogging, Professional Development, SQL Server, T-SQL Tuesday

T-SQL Tuesday #185 – Video Edition

Happy T-SQL Tuesday! This month (April 2025), Erik Darling asked us to record a video. Enjoy my take!

T-SQL Tuesday logo

Posted in T-SQL Tuesday

T-SQL Tuesday #184 – The Invitation

Happy T-SQL Tuesday! I’m excited to be this month’s host.

T-SQL Tuesday logo

Our topic this month is one that’s near and dear to my heart: Mentoring and Sponsorship. For those of you who don’t know, I am a co-founder of a mentorship circle, WITspiration, and have been a mentor as part of New Stars of Data many times. And I know I’m not the only person who is passionate about this topic in our community.

Continue reading “T-SQL Tuesday #184 – The Invitation”
Posted in T-SQL Tuesday

T-SQL Tuesday #182 – Data Integrity

It’s the first T-SQL Tuesday of 2025, and I feel like it’s a good time to get back to participating, since it’s been a while for me. My friend Rob Farley (blog | Bluesky) has a great topic for us this month. He invites us to talk about integrity, applying it to whichever topic we see fit.

I want to talk about it in terms of the data itself. I know that doesn’t narrow things down at all.

The first way to think of data integrity is a very small and literal interpretation: making sure the data in our database is good. In many ways, this is easy to enforce – you add constraints. Primary keys ensure that you know what makes each row unique. Unique constraints capture what would make each record unique if the primary key – often a surrogate key these days – didn’t exist, or offer an alternative. Together these help enforce the database design: are the tables properly normalized or arranged in a star schema? Then we can enforce the integrity of the relationships between tables using foreign keys. We make sure the data that is supposed to exist actually does with NOT NULL (although I suppose there could be some debate about calling NULL handling a constraint). We can enforce the values of the data with CHECK constraints. And let’s not forget the importance of choosing the correct data type to make sure the right data is stored in the table, because no one should store calendar dates in a character string column and expect no issues.
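All of those pieces can show up in a single table definition. Here’s a small, hypothetical pair of tables (the names and columns are made up for illustration) with each constraint type from the paragraph above:

```sql
-- Hypothetical tables showing each kind of constraint.
CREATE TABLE dbo.Customer
(
    CustomerId int IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_Customer PRIMARY KEY,          -- surrogate primary key
    Email      nvarchar(320) NOT NULL
        CONSTRAINT UQ_Customer_Email UNIQUE          -- the natural uniqueness
);

CREATE TABLE dbo.OrderHeader
(
    OrderId    int IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_OrderHeader PRIMARY KEY,
    CustomerId int NOT NULL                          -- NOT NULL: this must exist
        CONSTRAINT FK_OrderHeader_Customer
        FOREIGN KEY REFERENCES dbo.Customer (CustomerId),  -- relationship integrity
    OrderDate  date NOT NULL,                        -- a real date type, not a string
    Quantity   int NOT NULL
        CONSTRAINT CK_OrderHeader_Quantity
        CHECK (Quantity > 0)                         -- value integrity
);
```

Each constraint documents a rule about the data, and SQL Server enforces it so bad rows never get in.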

To me, these are the simplest and most fundamental ways to make sure our data is good and has integrity. By integrity, I mean we can trust the data. The added benefit is that most of these constraints also help SQL Server come up with better execution plans, which helps performance.

(As a side note, I actually have a presentation on this very topic.)

The default constraint may be the one exception. But I think it plays another important role, which leads into the other type of data integrity. A default helps define a column by saying what its data should be when a row is first created. It indicates a lifecycle for the data, or more specifically for that column. Sometimes that data may never change – for example, the date and time a record was first created. But sometimes it indicates the starting point for data such as a status field, which will change through its lifecycle from New to Open to In Progress to Resolved. So while a default may not be about the integrity of the record directly, it does speak to the greater integrity of the data by helping define the column and, by extension, the table.
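Both flavors of default – the set-once timestamp and the lifecycle starting point – look like this in practice. The table and status values here are hypothetical, matching the example states above:

```sql
-- Hypothetical table: the DEFAULT documents where the lifecycle starts,
-- and the CHECK documents the allowed states.
CREATE TABLE dbo.Ticket
(
    TicketId  int IDENTITY(1,1) NOT NULL
        CONSTRAINT PK_Ticket PRIMARY KEY,
    CreatedAt datetime2(0) NOT NULL
        CONSTRAINT DF_Ticket_CreatedAt
        DEFAULT (SYSUTCDATETIME()),                  -- set once, never changes
    Status    varchar(20) NOT NULL
        CONSTRAINT DF_Ticket_Status DEFAULT ('New') -- lifecycle starting point
        CONSTRAINT CK_Ticket_Status
        CHECK (Status IN ('New', 'Open', 'In Progress', 'Resolved'))
);

-- Inserts only need to supply what varies; the defaults fill in the rest:
-- INSERT INTO dbo.Ticket DEFAULT VALUES;  -- Status = 'New', CreatedAt = now
```

Read together, the DEFAULT and CHECK constraints tell the next developer exactly how a ticket is born and which states it can move through.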

This leads into the bigger concept of data integrity: making sure you have good definitions for your data and understand its lifecycle – how it moves and changes through the application. Our data is constantly being moved from one system to another. If you don’t believe me, look at the job description of a data engineer. And this makes it even harder to ignore the fact that the sources and destinations of our data aren’t necessarily databases, or even the same RDBMS when data moves between different databases. Throw in the modern software architecture which says everything should be done as microservices and every microservice should have its own database. So services that are related and use the same base data – such as people who could be classified as students, teachers, staff, or parents, or some combination of these – may end up stored in separate databases, or have their data split among multiple databases with the rest stored in yet another database for the general concept of a person.

It raises the question: is the data in this case really meant to be split into multiple databases (with no cross-database joins), or could schemas in the same database solve the problem? With schemas, we could still create the basic references to help enforce that integrity, and we wouldn’t have to make a lot of unnecessary calls to get the data we need. It would also help with keeping data consistent across multiple systems and prevent the proverbial Telephone Game, where the message at the end isn’t the same as the one at the beginning. Either way, we still have to look at the data as a whole when we figure out how to split or move it between systems.

One of the ways we can fight this is by clearly defining and documenting our databases. In a microservices architecture, teams must stay on top of this by creating data contracts for their apps and making sure they stay up to date. This may also mean working across teams throughout the organization to ensure that everyone has the same idea of what defines a staff member versus a teacher. For data engineers, it means making sure there’s clear lineage so we can see where the data goes, and that calculations and transforms are understood by everyone who needs to consume the data.

It all comes down to this: while it may appear we work in silos – from the application team to the different analytics databases used for reports – it’s actually the exact opposite. We almost need to overcommunicate what we do to keep our data intact, and documentation becomes ever more important to the work we do. And when our data does land in a database, the very least we can do is add our constraints. Our normal forms, star schemas, and better execution plans will thank us for it. #TeamConstraints

Looking forward to seeing how integrity gets interpreted in other ways across the data platform. Thanks again to Rob for a great topic!

Posted in T-SQL, T-SQL Tuesday

T-SQL Tuesday #179 – My Data Detective Toolkit

Happy T-SQL Tuesday! I wasn’t really sure I’d be able to crank something out for this one, but somehow I managed to squeeze it in. Tim Mitchell (blog) is hosting, and he has a great topic for us: What’s in our Data Detective toolkit?

I love this topic for so many reasons – partly because I’m so often dropped into projects and asked to figure things out, usually performance related but occasionally new functionality or features. Since I do this fairly often, I may have to see if Data Detective can be my new title… hmm…

Continue reading “T-SQL Tuesday #179 – My Data Detective Toolkit”