Posted in PowerApps, SQL Server

Working with PowerApps

I mentioned a while back that I’ve had a chance to work on a PowerApps proof of concept. It’s a little unusual for a database person like me, but it makes sense for what we wanted – a relatively simple way to expose data we’re collecting in a SQL Server database so people can view and, in some cases, update what we’ve been collecting.

Working through this, I hit all sorts of problems that took a lot of baby steps to get past. Some of them required improving my understanding of how PowerApps works, and some just took me being curious and investigating on my own.

The issues came down to:

  • Making database design decisions driven by delegation limits and the inability to call a stored procedure to simplify things like complex selects or even writes (see the sketch after this list)
  • A misleading error as to why I couldn’t deploy the app
  • Using PowerApps CLI to unpack and pack the code for deployment
  • Using different SQL Server instances for different environments
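
One way to work around the stored procedure limitation for complex selects is to push the logic into a view the app can read like a table. This is just a minimal sketch of the idea, using hypothetical table and column names rather than anything from the actual proof of concept:

-- Hypothetical example: keep the complex select on the database side so the
-- app only has to read from a simple source.
CREATE OR ALTER VIEW dbo.vw_ActiveCollections
AS
SELECT c.CollectionID,
       c.CollectedOn,
       c.CollectedValue,
       s.SourceName
FROM dbo.Collections AS c
JOIN dbo.Sources AS s
    ON s.SourceID = c.SourceID
WHERE c.IsActive = 1;

The SQL Server connector can read a view much like a table (writes still have to target the base tables), which keeps the heavier T-SQL out of the app formulas.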

This is still a proof of concept waiting on a final decision for the go-ahead. Whether we use it or not, I feel like I learned a lot from these issues that’s worth sharing. I’ll share each of these lessons learned, so stay tuned for the links going forward.

Posted in SQL Server, T-SQL

Implicit Conversion to Smaller Data Types?

I’ve been working on a project where I have to do a lot of interesting data manipulation. Unfortunately, dealing with implicit conversions has been part of the norm. So naturally, I managed to run across an interesting scenario that had me stumped, thinking I had run into a strange edge case.

The tl;dr version is that this wasn’t an edge case or some undocumented issue or a bug or anything other than the db engine doing its job. I was looking at one scenario and missed the rest of the clues. But it’s worth sharing how these things are easily missed.
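
As a generic illustration of how easily these sneak in (not the exact scenario from this post), comparing a character column to an integer literal quietly converts the column, because INT sits higher in data type precedence than VARCHAR:

-- Hypothetical table: the order number is stored as a string.
CREATE TABLE #Orders (OrderNumber VARCHAR(20) NOT NULL);
INSERT INTO #Orders (OrderNumber) VALUES ('12345');

-- The literal is an INT, so the VARCHAR column is implicitly converted to INT.
-- That can prevent index seeks and will fail outright if any row holds a
-- non-numeric value.
SELECT OrderNumber
FROM #Orders
WHERE OrderNumber = 12345;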

Continue reading “Implicit Conversion to Smaller Data Types?”
Posted in SQL Server, T-SQL, T-SQL Tuesday

T-SQL Tuesday #171 – Last ticket closed

Happy T-SQL Tuesday!

This month, our host is Brent Ozar (b). Brent’s mission for us: write a blog post about the last ticket you closed.

My initial reaction to this was: I’m on the development side of the house. I don’t close issues, I create them. 🙂

My second thought went to the types of tickets I get. The first type is for new functionality and features. The second type is the issues, otherwise known as “bugs.” Some of those are caused by things I did, like an incorrect calculation; others involve solving performance issues due to things I couldn’t predict, like cardinality estimator changes between SQL Server versions.
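
For that last kind of ticket, one of the quicker stopgaps (a generic sketch, not a fix from any particular ticket of mine) is pinning the regressed query to the legacy estimator while a real solution is worked out:

-- Hypothetical query: the database stays at the newer compatibility level,
-- but this one statement asks for the legacy cardinality estimator.
SELECT o.OrderID, o.OrderDate, o.TotalDue
FROM dbo.Orders AS o
WHERE o.CustomerID = 42
OPTION (USE HINT ('FORCE_LEGACY_CARDINALITY_ESTIMATION'));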

Continue reading “T-SQL Tuesday #171 – Last ticket closed”
Posted in SQL Server, T-SQL

TOP @n Estimated Rows – Part 2

When I originally posted about Estimated Rows = 100, I didn’t expect there to be a follow-up. But then Magnus Ahlkvist (LI), a fellow member of #TeamConstraints, had a great comment that was worth following up on:

Was the n-value sent to the proc as a parameter or did the proc have a declare statement where it was set? If it’s the latter, it makes total sense to me, because at compile time the n-value is unknown to the optimizer. If it’s a parameter and not a local variable, the initial value would be available at compile time.

I have a few times used parametrised dynamic SQL in a proc just to get the local variable value into the plan, and at other times used an OPTIMIZE FOR hint in the query, using a value substantially bigger than 100.

From <https://www.linkedin.com/feed/update/urn:li:activity:7142522604854079491/>
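
For anyone who hasn’t run into this, here’s a rough sketch of both of Magnus’s workarounds against a hypothetical table (not the proc from the original post):

DECLARE @n INT = 5000;

-- With a plain local variable, the value isn't known at compile time,
-- so TOP (@n) falls back to the fixed 100-row guess.
SELECT TOP (@n) OrderID, OrderDate
FROM dbo.Orders
ORDER BY OrderDate DESC;

-- Workaround 1: parameterized dynamic SQL, so the value arrives as a
-- parameter the optimizer can sniff when the inner statement compiles.
EXEC sys.sp_executesql
    N'SELECT TOP (@n) OrderID, OrderDate FROM dbo.Orders ORDER BY OrderDate DESC;',
    N'@n INT',
    @n = @n;

-- Workaround 2: OPTIMIZE FOR, telling the optimizer what value to plan for.
SELECT TOP (@n) OrderID, OrderDate
FROM dbo.Orders
ORDER BY OrderDate DESC
OPTION (OPTIMIZE FOR (@n = 5000));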

Continue reading “TOP @n Estimated Rows – Part 2”
Posted in Blogging, Professional Development

Looking Forward

It’s the end of the year so it seems appropriate that there be a look back. But I’m going to take things back a bit farther than 2023.

I was going through old digital files and cleaning them up. (No really – I was doing digital cleanup. I have witnesses!!!) And I ran across this gem of a document: “Deb_the_DBA persona”. I started it back at the beginning of 2012. I think there had been one or two SQL Saturday Bostons at that point, but I wasn’t really going to user group meetings even though I knew when they were. But I was starting to keep track of the community and learning who to follow so I could learn more. I kept seeing others’ blogs and their online “personas”, kept hearing those people say “start a blog!”, and finally thought to myself: if I were to do this, how would I do that?

Continue reading “Looking Forward”
Posted in SQL Server, T-SQL

Estimated Rows = 100

A little while back, my coworker was having trouble with a query and asked me about a strange thing they were seeing in the execution plan: the estimated number of rows for the query was showing as 100.

100 felt like an awfully specific number. And there were two scenarios I knew of where the SQL Server cardinality estimator immediately used that value: table variables, and multi-statement table-valued functions when the database is at SQL Server 2016 compatibility or earlier, or when the function is used as part of an APPLY (where interleaved execution doesn’t kick in) instead of a JOIN.
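
As an illustration of the second scenario (hypothetical function and table, not the query my coworker was fighting with), a multi-statement table-valued function at an older compatibility level shows up in the plan with a flat 100-row estimate no matter how many rows it actually returns:

-- Hypothetical multi-statement table-valued function.
CREATE OR ALTER FUNCTION dbo.GetCustomerOrders (@CustomerID INT)
RETURNS @Orders TABLE (OrderID INT, OrderDate DATE)
AS
BEGIN
    INSERT INTO @Orders (OrderID, OrderDate)
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = @CustomerID;
    RETURN;
END;
GO

-- Under compatibility level 130 (SQL Server 2016) or earlier, the operator
-- reading the function's result shows Estimated Number of Rows = 100.
SELECT o.OrderID, o.OrderDate
FROM dbo.GetCustomerOrders(42) AS o;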

The statement in question didn’t use either, so what was the issue?

Continue reading “Estimated Rows = 100”
Posted in PASS Summit, Professional Development, Speaking

My Post PASS Data Community Summit Thoughts

Picture of the Welcome banner at PASS Data Community Summit, taken by Andy Yun

It’s been more than a week since PASS Data Community Summit, and I’m finally getting around to sharing my thoughts.

Me, Andy Yun, and Jeff Iannucci standing at the lectern we presented at
From left to right: me, Andy Yun, Jeff Iannucci at the lectern

Personally, I had a blast. I was part of a Learning Pathway with Andy Yun (L | b) and our good friend Jeff Iannucci (L | b) called “Solving Real World SQL Server Problems”. We were thrilled with the number of people who came to listen to us and the level of engagement from the attendees. If you were there, thank you so much for being there! For those who filled out surveys or reached out directly with feedback, thank you! It goes a long way to helping all of us make these sessions even better. And of course, if you have questions, feel free to reach out and we’ll be happy to help.

It meant so much for me to be able to present in person. The two sessions I did this year were the first full-length, in-person ones that I’ve done at this conference. For the past 3 years, I have presented either virtually or pre-recorded. I don’t want to seem ungrateful for those opportunities because I was thrilled and honored to be a part of Summit; I know it’s not something everyone gets to do. And I hope those who attended those sessions got a lot out of them. But being able to present in person has been something I’ve been looking forward to doing for a long time. There’s something very different about the interaction you have with those who attend when they’re sitting in front of their individual computer screens, even if we’re all online at the same time. PASS Data Community Summit is the conference where I truly found my community and felt a part of #sqlfamily, so to be able to present face to face with attendees meant so much to me. Writing this out, it almost feels silly to make such a distinction, but I hope you understand why.

As always, this conference felt like a reunion with people in the data community. Being able to connect with those I see once a year or have connected with online is priceless. It was also amazing to see how many “first timers” were there; I think they announced the number was around 43%. As much as I love seeing old friends, it was great interacting with new people. I hope they left with the feeling that they are now part of the larger #sqlfamily. I’m looking forward to seeing all of them come back next year.

One of the reasons we go to conferences is so we can learn, and I managed to catch a few sessions outside of our Learning Pathway. But there were so many others I wanted to support this year that I wasn’t able to get to. I’m looking forward to catching them on the recordings when those are available. So many speakers, so little time…

Last year, the big announcement was about SQL Server 2022. This year, the “big” announcement in our space was that Fabric is now GA. It just wasn’t announced at Summit, but a few blocks over at MS Ignite. (To be honest, they may have said it at Summit, but I feel like I saw more posts on social media from Ignite than heard about it directly at Summit.) There were announcements made at Summit, but none that felt as grand as last year’s. Maybe it’s because there wasn’t anything as big as a general release of the next version of SQL Server. We heard a lot about improvements for non-SQL Server databases in Azure, general availability of a large set of Managed Instance features, a free Azure DB per subscription (preview), Azure DB hyperscale availability, and additional improvements for Azure Arc. Of these, Azure Arc definitely feels like a feature I want to pay more attention to. Why is that? During the Redgate keynote, both the preview of their survey of the database landscape and the instant survey during the talk showed that very few organizations are fully migrated to the cloud. Since Azure Arc can connect to on-prem instances as well as databases in Azure, understanding how it works and the information and functionality it offers has promise. I don’t live on the admin side of the DBA house, but it’s still something I want to know more about. We’ll see if I’m able to dive in more.

An AI-generated image of me as a superhero, courtesy of the Pure Storage booth.

Of course, AI was discussed. We saw different demos of Copilot, or AI built into new products, throughout the week along with a community keynote that highlighted the pros and cons. It will transform how we work so we need to be aware of the ways it will affect us. The biggest takeaway here was that AI is a tool and not a solution. I know we tend to confuse these things and treat tools as solutions (different rant for a different time), but if we understand the difference and how to use things like AI, we’ll be better off.

They’ve already announced the dates for next year’s PASS Data Community Summit, and I’m hoping to make it back. In the meantime, I’m going to turn my attention to some of the blog post ideas I’ve been floating around as well as other projects. And don’t worry, I’m going to try to find time to relax and catch up on all the shows that I’m way behind on.

Thank you PASS Data Community Summit! Until next year…

Posted in T-SQL, T-SQL Tuesday

T-SQL Tuesday #168 – Window Functions

Happy T-SQL Tuesday from #PASSDataSummit! With all of the prep for getting my sessions in order, I wasn’t sure if I was going to pull this together in time. Thanks to Steve Jones (b | t) for hosting this month’s T-SQL Tuesday. Also, thanks to Steve for making sure this blog party keeps going!

Steve asked us to write about how window functions make our lives easier. I’m pretty excited about this one because I love window functions and the flexibility they add to our queries. The first time I was able to use one (other than ROW_NUMBER()) in production, I was so excited, and then became sad because there was no other DB developer on my team to share the excitement with. In his invite, Steve said there are bonus points for talking about LEAD and LAG. So it’s purely coincidental that this was where I was going to take this post anyway.
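
As a tiny taste before the full post (hypothetical readings table, not my production schema), LAG pulls the previous row’s value onto the current row so you can compare them without a self-join:

SELECT ReadingDate,
       MeterValue,
       LAG(MeterValue) OVER (ORDER BY ReadingDate) AS PreviousValue,
       MeterValue - LAG(MeterValue) OVER (ORDER BY ReadingDate) AS ChangeFromPrevious
FROM dbo.MeterReadings
ORDER BY ReadingDate;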

Continue reading “T-SQL Tuesday #168 – Window Functions”