Career Thoughts: Never Would I Ever…

Today, Nov 1st, marks my 3-year anniversary at Pure Storage. And this milestone has put me in an introspective mood.

What Do I Want to Be When I Grow Up?

When I studied Computer Science at Marquette, I figured I’d wind up becoming a software developer of some sort. Or maybe a systems administrator. My first job out of college was as a generalist web developer & sys admin, so I got exposed to a ton of different things.

Then I specialized into SQL Server… first as a Jr. Ops DBA, then shifting into being solely a T-SQL developer. I thrived there and figured I’d remain in that realm for the rest of my career.

Then after a decade-plus, I found myself back in an Ops DBA role. I never really thought I’d pivot back that way, since at the time I barely understood SQL Server clustering options, much less had ever set up or managed an FCI or AG outside of a training class. Even so, after a few roles mixing Dev DBA and Ops DBA, I figured I’d remain solely in the SQL Server tech realm for the rest of my career.

Then I got a call (or DM, really)… and landed my first role as a Sales Engineer/Solutions Engineer for SentryOne. I never imagined I’d find myself in any kind of “sales” type role at all. And boy, was it foreign to me. But others saw potential in me, which is why I got recruited, and they were right: I thrived there. And I figured I might remain in that realm for the rest of my career.

Then 3 years ago, I joined Pure Storage. What was foreign to me about Pure is that I was never a hardware guy. I still remember sitting in “intro to storage” sessions at SQL Saturdays, just trying to wrap my brain around what the hell latency and IOPS really meant. iSCSI and Fibre Channel were foreign to me as well. And I had only rudimentary knowledge of VMware, virtualization, HA, and DR concepts and strategies.

And now, 3 years later, these are all things I talk about on a daily basis. Next week, I’m debuting a new session that I’m calling A Practical Deep Dive into I/O for the T-SQL Performance Tuner. It’s actually geared towards “Andy 5 years ago” who was definitely a T-SQL Perf Tuner but still not totally a hardware or I/O stack person. I’m still amazed at how far I’ve grown. And I’m still learning new stuff all the time!

TL;DR – Never Say Never…

The point to all of this, and this blog post, is to share with everyone how I’ve found myself in completely new realms throughout my career. And though today, you might think “I never could or would do that,” you really don’t know what life has in store for you. But if you have an open heart, an open mind, and an eagerness to learn, you absolutely can.

Thanks for reading.

T-SQL Tuesday #179: The Most Important Tool for a Data Detective

T-SQL Tuesday Logo

Welcome back to another edition of T-SQL Tuesday! This month’s edition is hosted by my good friend Tim Mitchell (b). In this month’s edition, Tim asks participants to share “What’s in your data detective toolkit?”

The All Powerful…

… Question. That is what I now believe is the most important tool for a Data Detective.

Asking Questions Effectively

This nuance involves HOW you ask a question. Part of this is knowing your audience. Is this the right place or the right time? Sometimes there comes a point where asking questions is just counter-productive, because your audience has no interest in answering. And it also means you need to make sure you’re asking the correct audience in the first place.

Asking Effective Questions

This nuance involves structuring an individual question to get the answer that you are after. There are times you just want a Yes or No – so say that as part of your question. There are other times you want more details… it’s often best to state that up front too.

Framing

Another critical aspect of asking questions effectively and asking effective questions is to properly frame the entire interaction. State WHY you are asking what you are asking. If it is a higher-stress situation, doing this effectively can help others understand that you’re trying to work towards a common goal of solving a problem.

Active Listening

Finally, the most important thing in all of this is learning how to ACTIVELY LISTEN. Too often, people will only hear one or two tidbits, then immediately start thinking about what they want to ask or say or do next, rather than continuing to listen to the respondent. If you are talking just as much as or MORE than the respondent, you’re not questioning or listening effectively.

Stop Jumping to Conclusions

One thing I see all too regularly is that someone will present a problem or a challenge, and people will immediately start trying to answer or address it. But more often than not, not enough information is given initially. Why are you seeking to do this? What have you tried thus far? What is your desired outcome? What problem are you trying to solve here?

It’s not good enough to just ask questions. Questions can sometimes put people on the defensive (“Why are you putting me on trial?”). One must also learn how to ask questions effectively and how to ask effective questions – the two distinct but critical nuances covered above.

A Recent Example…

I enjoy spending time on Reddit and also enjoy answering questions on the r/SQLServer subreddit. Recently, someone asked about a data load that was “overwhelming the resources” and “slowing the API down to a level that’s unacceptable in production.” Most of the immediate respondents jumped in, assuming this was an on-prem SQL Server and trying to solve the challenge using only the initial information given.

Instead, I saw the necessity to dig deeper. The original poster (OP) then shared that this was an Azure SQL Database. Okay, that changes some of the rules here… so I dug deeper.

And eventually the OP stated that it was an Azure SQL DB on a single vCore, General Purpose tier! FULL STOP! I/O characteristics and potential are tied directly to the number of vCores allocated to an Azure SQL DB, and one vCore is basically the lowest you can go. That might be fine for development, but how can one expect to run a production application on a single vCore?!
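
As an aside, if you ever need to confirm what tier and service objective an Azure SQL Database is actually running on before you start tuning, a quick check from T-SQL looks something like this (a minimal sketch; the property names below are the documented DATABASEPROPERTYEX values):

-- Confirm the Azure SQL DB edition and service objective (e.g. GeneralPurpose / GP_Gen5_2)
SELECT DATABASEPROPERTYEX(DB_NAME(), 'Edition') AS edition,
       DATABASEPROPERTYEX(DB_NAME(), 'ServiceObjective') AS service_objective;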

So while other respondents gave great advice on how to tune a data load, none of it would do any good until the OP actually increases the number of vCores on their Azure SQL DB!

Conclusion

If you’re reading this, you’re most likely already an inquisitive and curious type. As such, I hope this blog will encourage you to dig deeper and hone your skills around asking effective questions and asking questions effectively.

Thanks for reading!

PowerPoint Quick Tip: Notes Font Size

In my PowerPoint presentations, I like to use the Notes tab a fair amount. Not only can I put my own cheater notes in there, but I feel like it helps audiences who review my slides after a presentation as well.

A couple of years ago (yes, years… ), I encountered a peculiar behavior where the text in my Notes panel became SUPER TINY. I could not figure out how to correct this and it drove me bonkers, so I just resigned myself to it and left it alone.

Can you read that? I barely can…

Anyway, I’m starting my slide deck for my 2024 PASS Summit presentation, and decided to procrastinate, er… revisit the Notes issue once again.

Why not re-start fresh?

Andy, why didn’t you just start a new PowerPoint? Because I’m extremely particular about formatting and layouts, whenever I create a new session I just copy the PowerPoint from my most recent presentation. I continually make adjustments to how I like to do things, so these layouts are “living evolutions” for me, which is why I did not want to start fully from scratch.

Solution 1: The Should-Have-Been Obvious One

These days, I rely a lot on Reddit for crowd-sourced information. And when I went digging this time around (as I didn’t use Reddit back when I last looked into this), I found one solution that is making me feel like an idiot.

Ctrl + Scroll Wheel to Zoom

Seems there’s an independent text zoom setting for the Notes tab that is not displayed anywhere. And this is most likely what inadvertently caused the issue in the first place. Click on the Notes tab, Ctrl-Scroll, and BOOM – zoomed in and out.

Commence facepalm for not getting this resolved sooner.

Solution 2: The Obscure Solution: Show Text Formatting

There’s a second solution that I learned about though. If you go to the Outline View, then right-click somewhere on the left-hand side, you’ll get a context menu option called Show Text Formatting.

If you enable Show Text Formatting, then you can also make more adjustments to the contents of the Notes tab.

Happy PowerPointing and thanks for reading!

Migrating SQL Server Database Files Between Storage Subsystems

In my role at Pure Storage, I often engage with customers who wish to migrate their SQL Server databases off of their prior storage onto our hardware. And after some digging around for previously published material, I was surprised to find that there really wasn’t much that was comprehensive. After all, one doesn’t change SANs too often. But when it does happen, it’s nice to have some reference material from others who have. So I decided to try to give a good overview of how I’d approach the challenge.

This is meant to be a “food for thought” kind of post. I’m going to keep things somewhat high level, but will provide links to other blogs and material that can help you continue down whatever path you choose. And for simplicity, I’m going to limit this scope to a single SQL Server.

Questions First

  1. Is your SQL Server virtualized or not?
    • VMware or another hypervisor?
    • If VMware, are your existing volumes RDM, VMFS, or vVols?
  2. How much downtime can you take for the actual cutover?
  3. Are you migrating your databases to a new SQL Server instance or keeping everything on the same instance?

VMware VM?

First, the easy one. If your SQL Server instance is on VMware VMFS or vVols, use Storage vMotion and call it a day. Are you one of the teeny-tiny percentage of folks running on Hyper-V? That’s cool – there’s storage migration functionality in there too, both for standalone Hyper-V VMs and Hyper-V VMs that are clustered and residing on Cluster Shared Volumes.

If you’re on RDMs, know that from VMware’s perspective, RDMs are dead and vVols are the way of the future, so take this migration opportunity to consider changing that as well. One legacy reason SQL Servers needed RDMs had to do with clustered storage, but the capabilities needed for that are now available in VMFS and/or vVols. Still have a Failover Cluster Instance with RDMs? Then check out this oldie-but-goodie blog for starters.

Cutover Downtime

The most critical question to answer next is how much downtime can you take for the actual migration? Do you have the luxury of being able to take an entire weekend because your workload is only Mon-Fri? Cool, you have it easy. More typically, I see teams that can afford a few hours of downtime overnight. And occasionally, I’ll hear “we can’t have ANY downtime!”

The long and short of it is this… your migration solution will most likely increase in complexity as the amount of downtime available to you decreases.

My Preferred Option

Generally speaking, I would pre-stage the databases via backup/restore to the new storage. If you are also migrating to a new SQL Server instance, use Log Shipping! It just works!

If you are remaining on the same SQL Server instance, you can use a “log-shipping-like” methodology. Present new volumes from your new SAN to the same instance and restore the databases side-by-side (named differently, of course). Then, if you want, you can use DIFFs and/or T-log backups to keep that copy up to date. This makes the most sense if you, say, leisurely pre-stage the databases 1-2 weeks beforehand, then take a DIFF the morning of the cutover and restore it during the workday. At cutover time, you’d take a final transaction log backup (the tail), restore that, and then swap database names. Of course, you can also switch drive letters/mount points if you really want to.
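
To make that concrete, here’s a rough, hypothetical sketch of the same-instance flow. The database names, logical file names, and paths below are all made up – adjust everything to your environment:

-- Pre-stage a side-by-side copy onto volumes presented from the new SAN
RESTORE DATABASE SalesDB_New
FROM DISK = N'X:\Backups\SalesDB_Full.bak'
WITH MOVE N'SalesDB_Data' TO N'N:\NewSAN\SalesDB_New.mdf',
     MOVE N'SalesDB_Log' TO N'O:\NewSAN\SalesDB_New_log.ldf',
     NORECOVERY;

-- Morning of cutover: roll forward with a differential
RESTORE DATABASE SalesDB_New
FROM DISK = N'X:\Backups\SalesDB_Diff.bak'
WITH NORECOVERY;

-- At cutover: kick everyone off the original, rename it out of the way,
-- back up the tail of the log, apply it to the new copy, and recover
ALTER DATABASE SalesDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
ALTER DATABASE SalesDB MODIFY NAME = SalesDB_Old;
BACKUP LOG SalesDB_Old TO DISK = N'X:\Backups\SalesDB_Tail.trn' WITH NORECOVERY;
RESTORE LOG SalesDB_New FROM DISK = N'X:\Backups\SalesDB_Tail.trn' WITH RECOVERY;

-- Finally, rename the new copy into place
ALTER DATABASE SalesDB_New MODIFY NAME = SalesDB;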

Here’s a SQL Server Central blog that covers this in a bit more detail.

PowerShell Is Your Friend

I’d also strongly suggest making your life easier by using dbatools.io. If you’re on the same instance, check out Move-DbaDbFile. I like this one for same-instance scenarios as it’s stupid simple, but at the trade-off of taking each database offline during the operation (though there is a restore option too). But it handles metadata which is great.

And if you’re moving to a new instance, utilize Start-DbaMigration. If you’re new to dbatools.io, no worries! There’s a wealth of resources available – go search “dbatools” on YouTube and you’ll find a ton of amazing presentations showcasing how to use it, like this one from Jess Pomfret!

One other benefit to PowerShell… well, remember how I said this blog was scoped to a single SQL instance? If you have a TON of SQL Servers to migrate, PowerShell is an amazing way to code up a solution once and apply it to all of them!

Filegroups – aka: I Didn’t Know I Could Do That!

Filegroups are one of those underrated capabilities in SQL Server that I find most folks (myself included) don’t use. When you create a basic database, you get a data file, a log file, and a single PRIMARY filegroup. But you can create another filegroup, add underlying files to it, and then migrate existing data to that new filegroup. The cool thing about this option is that it’s fully online! The drawback is that it may not be as fast as one might like, because it’s a size-of-data operation. But you remain online, so it can be done over a long period of time. There is one more quirk – it’ll introduce a ridiculous amount of external fragmentation. And remember that though external fragmentation is not as impactful as it once was on spinning platters, it still negatively impacts read-ahead operations, so it could negatively impact your workloads.
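
As a very rough sketch of the mechanics (database, filegroup, table, and index names here are all hypothetical, and ONLINE index rebuilds require Enterprise Edition):

-- 1) Create a new filegroup backed by a file on the new storage
ALTER DATABASE SalesDB ADD FILEGROUP FG_NewSAN;
ALTER DATABASE SalesDB ADD FILE (
    NAME = N'SalesDB_Data_NewSAN',
    FILENAME = N'N:\NewSAN\SalesDB_Data_NewSAN.ndf',
    SIZE = 100GB
) TO FILEGROUP FG_NewSAN;

-- 2) Rebuild the clustered index onto the new filegroup; this moves the table's data
--    and can remain online while it runs
CREATE UNIQUE CLUSTERED INDEX PK_BigTable
ON dbo.BigTable (BigTableID)
WITH (DROP_EXISTING = ON, ONLINE = ON)
ON FG_NewSAN;

Note that each nonclustered index needs the same treatment, and LOB data won’t move this way without extra steps.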

If you’re curious to learn more about this, check out Bob Pusateri’s blog post here. He presents a hybrid solution, because of specific requirements he had, but it’s extremely insightful on how to orchestrate this. And yes, this is a complex approach, but as I said earlier, if you must absolutely minimize downtime, a more complex orchestration will be your trade-off.

Thanks for reading!

Pure Storage FlashArray – Will a DELETE and/or SHRINK reclaim space?

In my current role at Pure Storage, I have the privilege of working with two amazingly smart, awesome SQL Server nerds: Andrew Pruski (b) and Anthony Nocentino (b). We often find ourselves facing interesting questions about SQL Server and storage, and today was no exception.

Andrew had a customer who wanted to know what happens on our FlashArray, from a space usage perspective, when they first delete a large volume of data in a database’s data file, then subsequently shrink the database’s data file.

DELETE Records

To properly answer what happens on FlashArray, one must look at all of the steps that happen in between.

First, what happens when SQL Server executes a DELETE operation against a ton of data? In a nutshell, SQL Server marks the data records as ghosted, and the ghost cleanup task eventually comes along and marks those data pages for re-use. Nothing is deallocated at the Windows file system level – the data file stays the same size – so no space is given back to Windows.

Aside: Individual record/singleton deletions are a bit of a different story, which I may explore in a future blog. Today, we’re only focusing on mass-deletion of tons of records/tons of data pages.
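
If you want to see the ghosted records for yourself after a big DELETE, something like this will show them until the ghost cleanup task gets around to them (a minimal sketch; the table name is hypothetical):

-- ghost_record_count is only populated in the SAMPLED or DETAILED scan modes
SELECT index_id, alloc_unit_type_desc, page_count, record_count, ghost_record_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.BigTable'), NULL, NULL, 'DETAILED');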

SHRINK DATABASE

So now that we’ve deleted a bunch of data, for whatever reason (good or bad) we want to run a SHRINK DATABASE to reclaim storage space. In a SHRINK operation, SQL Server essentially physically moves and consolidates data pages within each data file. If there are gaps within the data file, SQL Server will move data pages around, “smooshing” them together, then deallocate the newly freed space at the end of the data file. In Windows on NTFS, this means that a deallocation is sent to the file system, which is essentially pushed down to the storage array.
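
Sketching that out with a hypothetical database name, the before/after is easy to observe from SQL Server’s side via sys.database_files:

-- SQL Server's view of the data file's allocated size, in MB (size is in 8 KB pages)
SELECT name, type_desc, size * 8 / 1024 AS size_mb
FROM MyDatabase.sys.database_files;

-- Consolidate pages and release the freed space at the end of each file back to the file system
DBCC SHRINKDATABASE (MyDatabase);

-- Or release only the already-free space at the end of a single data file, without moving pages
-- DBCC SHRINKFILE (MyDatabase_Data, TRUNCATEONLY);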

To use an analogy, you have a large storage container (aka your data file) and little boxes (data pages) randomly sitting about inside. When you SHRINK, you’ll move boxes and stack them all in one corner. Then imagine you can use magic to shrink the larger storage container into a smaller one. So now your storage container takes up less overall space.

Aside: Paul Randal goes into greater detail in this blog post, along with why SHRINK is a bad idea.

Aside 2: If you’re VMware virtualized on VMFS, there are additional caveats that will not be covered in this blog post today.

FlashArray

Within FlashArray, we do a variety of different things with the underlying data that is written to us. These operations include thin provisioning, data reduction algorithms, global deduplication, and compression. Capacity utilization needs to be thought of a bit differently on our array. And we use an algorithm called redirect-on-write, which I like to think of as an Append-Only or Insert-Only methodology for writing data (over-generalizing: we never update, just insert new).

So would sending a Windows NTFS deallocation to FlashArray cause FlashArray to reclaim space?

Testing

To prove this out, I adapted Paul Randal’s demo script from his SHRINK data files blog, and added additional steps to check storage utilization on the FlashArray. You can review my published test script here.

To outline what I did, I created a new database whose data file was isolated on a dedicated volume (don’t care about the transaction log for this test). Then I used Paul’s methodology to create a table and hydrate it with a bunch of data. Then I created another table, hydrated that, then recorded my storage utilization from both SQL Server’s perspective and from FlashArray’s.

To begin the actual test, I dropped the first table, then recorded FlashArray’s space utilization on the volume where the data file resided. Next, I ran DBCC SHRINKDATABASE, and then recorded space utilization again.

Results

Before executing the first delete, my database was 31.11 GB in size on FlashArray. After the DELETE, nothing changed. This is expected, because Windows did not deallocate anything – the SQL Server data file remained the same allocated size. Then after the SHRINK operation, which deallocated data, the size on FlashArray dropped to 15.94 GB. Space was reclaimed on FlashArray.

Conclusion – TL;DR

When deleting records from a table, the space consumed is not impacted or reclaimed on the FlashArray. If you want to reclaim space, you must execute an operation to cause Windows to deallocate, which a SHRINK operation would do.

Thanks for reading!

T-SQL Tuesday #176: Advice to Yourself When You First Started

T-SQL Tuesday Logo

Welcome back to another edition of T-SQL Tuesday. This month’s blog party is hosted by Louis Davidson. His request for us bloggers is to share “What advice do you wish Current You could go back and give past you as you were starting your first data platform job?”

Down Memory Lane

My first “data platform job” would technically be my first job out of college. I was working for a really small dot-com consulting company, wearing many hats as an internet app developer (ASP pre-.NET, Perl, and PHP, baby!) and sysadmin for our Linux web servers. Back then, my first database was MySQL. We eventually got a Windows-based customer, so I got sent to SQL Server 2000 training.

Back then, I definitely didn’t know what the hell I was doing. I was fresh out of college and remember feeling like I learned more in my first 6 months on the job than I had in the entirety of my college years. In hindsight, that’s not quite true, as the knowledge I acquired in each of those phases of life was very different but uniquely useful.

The funny thing is… I know what I’d tell Andy 5 years ago… and 10 years ago… even 15 years ago. But 25 years ago, when I first started… I’m not quite sure what I’d tell myself way back then! At least from a technical perspective. Why? Because I was in the early, floundering stages of learning. And that’s okay. It’s no different than when one learns how to ride a bicycle for the VERY first time. Some might pick it up quickly, but most of the rest of us will struggle with it at first.

My Advice

My advice to someone starting their career is to understand the learning process.

You will flounder. You will make mistakes. You will not be perfect from the get-go.

And that’s okay.

Keep at it. Be persistent. Be curious. Be resourceful.

While this reminder is valuable to any professional, no matter what stage of their career they may be in, I feel that reinforcing this to someone who is fresh and new is particularly paramount.

Thanks for reading!

T-SQL Tuesday #174: Your Favorite Job Interview Question

Welcome to another edition of T-SQL Tuesday! This month’s blog party is hosted by Kevin Feasel (b). Kevin also happens to be the maintainer of Curated SQL, which is an excellent resource to keep up on current trends! Anyway, in this month’s edition, Kevin has asked us to write about our favorite job interview question(s).

You Know Everything… Right?

In my personal opinion, a strong candidate for just about any position isn’t one who happens to be a walking encyclopedia with a photographic memory. In fact, I really do NOT want a “know-it-all,” even one who truly DOES know it all. K.I.A.s (a rather unfortunate acronym, yes, but I’m feeling lazy so will keep using it) are oftentimes arrogant, lack empathy, and are generally difficult to work with. They bring a library of knowledge to the table, and they know it… and unfortunately act accordingly. This has just been my unfortunate experience with K.I.A.s, and there are absolutely exceptions out there… and if you DO find one of those exceptions, well, make that exception!

Seek… and You Will Find Answers…

Anyway, I’d far rather find someone who is resourceful, especially when faced with something they do not know. So I will often ask questions that test that resourcefulness. I’ll ask for stories or anecdotes about times when a person got thrown into something and had to “figure it out.” What did they do? Where did they look for information? Did they turn to others? If so, who? I might also vary it by asking hypotheticals. If you had to learn technology X tomorrow, where would you start? How would you go about it? What would be your learning process? And even better, have you had to do something like this recently?

Reading Between the Lines

The nice thing about questions like the above is that one can also read between the lines and learn other things about a candidate. Are they excited when they answer? Do they have personal stories to draw upon, or are they just giving generic answers like “I’d find a book”? Does being thrown into something unknown make them uncomfortable or excited? You may be able to glean what type of learner they are as well; some folks prefer to read, while some prefer to dive in and get hands-on first.

TL;DR – What Is Your Favorite Interview Question Andy???

One whose answer contains multiple insights. One where I can learn something about a person by their verbal answer, but also learn more deeply about them by their non-verbal answers.

Thanks for reading!

Quickie Blog – DBCC PAGE… WITH TABLERESULTS?!?

Welcome back to another quick blog about something I just learned.

I’m playing around with Always Encrypted for the first time. I was just following along with the basic tutorial and encrypted some columns in my AutoDealershipDemo database. But then I decided to crack open the data page using my friend DBCC PAGE.

Text output of DBCC PAGE (15, 1, 1218323, 3)

WITH TABLERESULTS

Because I only use DBCC PAGE() every so often, I did a quick search to refresh myself on the syntax. And what I found was someone using something I’d never seen before…

DBCC PAGE(aa, bb, cc, dd) WITH TABLERESULTS

“WITH TABLERESULTS”? What is this…?

DBCC PAGE(15, 1, 1218323, 3) WITH TABLERESULTS

WOW – for me, this makes the output SO much easier to consume!!!
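
In case you’ve never gone page spelunking before, here’s a rough sketch of how I’d find a page to look at. The table name below is hypothetical, and %%physloc%% / fn_PhysLocFormatter are undocumented (but widely used) helpers:

-- Find the (file:page:slot) location of a row, then crack that page open
SELECT TOP (1) sys.fn_PhysLocFormatter(%%physloc%%) AS page_location, *
FROM dbo.Inventory;   -- hypothetical table name

-- Using the page number from above, e.g. (1:1218323:0):
DBCC PAGE (AutoDealershipDemo, 1, 1218323, 3) WITH TABLERESULTS;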

And what of that encrypted data?

And if you’re curious, here’s what I was really after.

You can see how I had encrypted the VIN, InvoicePrice, MSRP, and DateReceived columns in my test table. Pretty cool to see the data scrambled up, and interesting to see how it consumed a heck of a lot more space in-row.

DBCC Commands

Another quick search tells me that you can use “WITH TABLERESULTS” with other DBCC commands too. I didn’t look much more into it since I got what I wanted, but thought I’d share this neat tidbit.

Thanks for reading!

Quickie Blog – Page Compression & Heaps

Found an interesting tidbit while working on a compression diagnostic script, and thought I’d write a quick blog.

I was refreshing myself on all of the output of sys.dm_db_index_physical_stats and its nuances when I found an interesting tidbit about “compressed_page_count.”


For heaps, newly allocated pages aren’t PAGE compressed.

A heap is PAGE compressed under two special conditions: when data is bulk imported or when a heap is rebuilt. Typical DML operations that cause page allocations aren’t PAGE compressed.

Rebuild a heap when the compressed_page_count value grows larger than the threshold you want.

https://learn.microsoft.com/en-us/sql/relational-databases/system-dynamic-management-views/sys-dm-db-index-physical-stats-transact-sql?view=sql-server-ver16

Well then… the moral of the story, folks, is that if you have heaps that are PAGE compressed, you may not be getting the benefits you think you are if you’re not executing rebuilds regularly!
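
Here’s a quick sketch (with a hypothetical heap name) of checking how much of a heap is actually PAGE compressed, and then rebuilding it:

-- Use the DETAILED scan mode so per-page counters like compressed_page_count are populated;
-- index_id = 0 targets the heap itself
SELECT OBJECT_NAME(ips.object_id) AS table_name, ips.page_count, ips.compressed_page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.MyHeap'), 0, NULL, 'DETAILED') AS ips;

-- Rebuild so existing (and newly allocated) pages get PAGE compression applied
ALTER TABLE dbo.MyHeap REBUILD WITH (DATA_COMPRESSION = PAGE);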

Thanks for reading.

sp_helpExpandView and Triggers

Recently, someone asked my wife Deborah if she knew whether my sp_helpExpandView utility procedure could also work with triggers. Great question – I rarely work with triggers, especially nowadays, so I had no idea… and I finally found some time and motivation to dig deeper into it.

TL;DR – How Does sp_helpExpandView Work with Triggers?

  1. If you run sp_helpExpandView with a parameter that IS a trigger, you’ll get information about the objects that the trigger references. However, you will not get information about the table that the trigger is associated with (which could be derived from another query).
  2. If you run sp_helpExpandView with a parameter that is a table that HAS a trigger, you will NOT get any information about that trigger being present. You’ll have to write a different query for that.
  3. If you run sp_helpExpandView with a parameter that is a table that is UTILIZED BY a trigger, you WILL get information about that trigger.

Digging Deeper

To dig deeper, I set up a quick prototype with two tables and a basic INSERT trigger.

USE TempDB;
GO

CREATE TABLE dbo.TableWithADMLTrigger (
    RecID INT IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
    MyValue VARCHAR(50),
    MyTimestamp DATETIME DEFAULT(GETDATE())
);

CREATE TABLE dbo.TriggerAuditTable (
    RecID INT IDENTITY(1, 1) PRIMARY KEY CLUSTERED,
    InsertedRecID INT,
    MyValue VARCHAR(50),
    MyTimestamp DATETIME
);
GO

CREATE OR ALTER TRIGGER dbo.tr_InsertTriggerForAuditing
ON dbo.TableWithADMLTrigger
AFTER INSERT
AS
BEGIN
    INSERT INTO dbo.TriggerAuditTable (
        InsertedRecID, MyValue, MyTimestamp
    )
    SELECT inserted.RecID, inserted.MyValue, inserted.MyTimestamp
    FROM inserted;
END
GO

INSERT INTO dbo.TableWithADMLTrigger (MyValue) VALUES ('alpha');
GO 3

SELECT *
FROM dbo.TableWithADMLTrigger;

SELECT *
FROM dbo.TriggerAuditTable;
GO

Now, let’s try running sp_helpExpandView against each entity.

EXEC sp_helpExpandView 'dbo.TableWithADMLTrigger';
GO

EXEC sp_helpExpandView 'dbo.TriggerAuditTable';
GO

EXEC sp_helpExpandView 'dbo.tr_InsertTriggerForAuditing';
GO

The first two commands will yield nothing. Here’s the output of the third, against the trigger itself. Note that it shows you the dbo.TriggerAuditTable that is used by the trigger, but not the table that the trigger itself is defined on – dbo.TableWithADMLTrigger.

The reason for this has to do with the underlying code’s use of sys.dm_sql_referenced_entities(). There are actually two of these DMFs in SQL Server, sys.dm_sql_referenced_entities() and sys.dm_sql_referencing_entities(). Note the critical difference, referencED and referencING.

Confused? Me too – which is why I have to re-explain the difference to myself every time I revisit this topic. Hopefully this summary helps you.

  • Referenced Entities: Returns all objects that are referenced by the object parameter passed in.

    ex: If dbo.MyStoredProcedure contains a SELECT * FROM dbo.MyTable, then dbo.MyTable is referenced by the stored procedure and would appear in the resultset from a query against sys.dm_sql_referenced_entities()
  • Referencing Entities: Returns all objects that reference the object parameter passed in.

    ex: If dbo.MyTable is passed in, then dbo.MyStoredProcedure would appear in the resultset from a query against sys.dm_sql_referencing_entities()

Interestingly enough, not all objects are supported by both DMFs. See the Remarks section of the documentation of either DMF for a full table. The consequence for sp_helpExpandView is that if you run it against tables that are either referenced by or referencing a trigger, it will not yield any output.
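
For reference, here’s the referenced side of the coin, run directly against the demo trigger above – roughly what sp_helpExpandView leans on under the covers:

-- What objects does the trigger reference? (the trigger is the referencING entity here)
SELECT referenced_schema_name, referenced_entity_name, referenced_minor_name
FROM sys.dm_sql_referenced_entities('dbo.tr_InsertTriggerForAuditing', 'OBJECT');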

Partial Solution via sys.dm_sql_referencing_entities()

You can use sys.dm_sql_referencing_entities() to see triggers that reference a table. Here’s more example code.

DECLARE @TableName NVARCHAR(256) = 'dbo.TriggerAuditTable';

SELECT --DISTINCT
    DB_NAME() AS database_name,
    @TableName AS table_name,
    COALESCE(schemas.name, dm_sql_referencing_entities.referencing_schema_name)
        + '.' + dm_sql_referencing_entities.referencing_entity_name AS object_referencing_this_table,
    schemas.type AS referencing_object_type,
    dm_sql_referencing_entities.referencing_id AS referencing_object_id
FROM sys.dm_sql_referencing_entities(@TableName, 'OBJECT')
LEFT OUTER JOIN (
    SELECT objects.object_id, schemas.name, objects.type
    FROM sys.objects
    INNER JOIN sys.schemas
        ON objects.schema_id = schemas.schema_id
) AS schemas
    ON dm_sql_referencing_entities.referencing_id = schemas.object_id;

You can combine the above with code that returns all triggers defined on a given table (see the sketch below) to get a bigger picture of trigger impact and relationships. Someday I may try to integrate this into sp_helpExpandView. Or I would absolutely welcome and credit anyone who wants to modify the code and issue a Pull Request on GitHub.
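
Here’s one way to get that list of triggers defined on a given table, via sys.triggers:

-- All DML triggers defined on a given table, plus whether they're disabled or INSTEAD OF
SELECT tr.name AS trigger_name,
       OBJECT_SCHEMA_NAME(tr.parent_id) + '.' + OBJECT_NAME(tr.parent_id) AS parent_table,
       tr.is_disabled,
       tr.is_instead_of_trigger
FROM sys.triggers AS tr
WHERE tr.parent_id = OBJECT_ID('dbo.TableWithADMLTrigger');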

Thanks for reading.