Beer, brats, unicorns, paper airplanes and 36 awesome sessions

I’ve been meaning to write a blog post about last week’s SQL Saturday experience, so I finally got to work. Here it is:

Bored Member

After the previous weekend set a record with five SQL Saturday events around the world on April 14, it was time for Wisconsin to show what they can do. And boy, did they put on a great event. The organizing team did a fabulous job, helping 227 hungry minds learn about SQL Server from 33 speakers in 36 sessions organized across 6 tracks. They almost made it look easy.

I remember meeting the person responsible for this two years ago at SQL Saturday #31 in Chicago. Back then Jes Borland (b|t) had traveled 160 miles to volunteer at the event. She enjoyed it so much that she came back the next year as a speaker, and a year after that, with the help of her great friends from MADPASS, she put on an epic first-time event! This goes to show how important a single person can be in the SQL Family, and anyone who thinks they will never be able to rise to the level of some of the speakers should learn a lesson from Jes. Every marathon starts with a first step (and she will run one in Kenosha next week, too), and every little bit helps.

I left work Friday afternoon, after a pretty busy day, with Vince (b|t), and we headed north to the land of beer and cheese. We got to Madison, checked in at our hotel, and shortly after I was headed to the speaker dinner. The atmosphere was great, the beer was good, and the lasagna was too much for me to handle in one sitting. The highlight of the night was the paper airplane fight started by Aaron Lowe (b|t) and continued in part by me. We had a lot of fun, unaware that above our heads was a clothesline with underwear and shirts to complete the Italian experience.

The next day we woke up bright and early, had a frugal breakfast at the hotel, and headed to Madison Area Technical College for the event. We registered, grabbed some coffee and a bagel, and after finding the speaker room we chose Mike Donnelly’s (b|t) SSIS: Figuring Out Configuring session, his first SQL Saturday presentation ever. He did a great job with a demo-packed session that had a very good audience.

Unicorns and gang signs w/ @StrateSQL

Next we decided to take Erin Stellato’s (b|t) DBCC Commands: The Quick and the Dangerous. She did a great job condensing all the DBCC commands into one session. I couldn’t help dropping the “unicorns will die if you shrink the database” slogan, and the room liked it. The same room hosted an extraordinarily fun session on what happens when you think outside of the reporting box. Stacia Misner’s (b|t) (Way Too Much) Fun with Reporting Services was spiced up with a lot of humor with the help of her daughter, Erika Bakse (b|t). They started playing Words With Friends inside an SSRS report and then went under the covers to explain how it is done, and mainly how to use Report Actions in SSRS to create interactions inside a report.
It was lunchtime, and the menu included tasty brats, burgers, and Cows of a Spot tables, the Wisconsin version of Birds of a Feather. I sat at the Data Profiling table and had a great conversation with Ira Whiteside (b|t) and his lovely wife, Theresa Whiteside. They were very kind to share with me and Vince some of the experience accumulated over the course of their vast careers working with data profiling and data quality, as well as some of the methods presented in Ira’s session, which started right after lunch. It was an extremely interesting session titled Creating a Metadata Mart w/ SSIS – Data Governance, and we learned a lot from it.

Then it was time for my session on Optimizing SQL Server I/O with Solid State Drives, a session that seems to be very popular (the same session was selected for SQL Saturday #119 in Chicago, as well as in Washington, DC and Tampa before). I got some great feedback that I will try to use to improve the format and add some fresh content (a whiteboard on TRIM, wear leveling, and bad block management). I had the honor of having Norm Kelm (b|t), Craig Purnell (b|t), and Matt Cherwin (t) in the audience. We did an impromptu drawing for one of Ted Krueger’s books (signed by Ted himself), and we stayed in the same room for the last session of the day with Sanil Mhatre (b|t) on Asynchronous Programming with Service Broker. He did an excellent job, and it was his first SQL Saturday presentation ever.

It was a great day and an epic event by MADPASS which made me look forward to the next SQL Saturday in Madison.




Speaking at SQLSaturday #118 in Madison, WI

I am happy to announce that my session on “Optimizing SQL Server I/O with Solid State Drives” was selected for SQL Saturday #118 in Wisconsin’s capital, Madison on Saturday April 21, 2012!

Location is:
Madison Area Technical College – Truax Campus
3550 Anderson St, Madison, WI 53704

SQLSaturday is a training event for SQL Server professionals and those wanting to learn about SQL Server. Admittance to this event is free, all costs are covered by donations and sponsorships. Please register soon as seating is limited, and let friends and colleagues know about the event.
If you are attending please stop by and introduce yourself. Also if you are on Twitter don’t forget to use hashtag #sqlsat118 and see what others are doing.

Hope to see you there!

Speaking at SQLSaturday #110 in Tampa, FL

I am happy to announce that my session on “Optimizing SQL Server I/O with Solid State Drives” was selected for SQL Saturday #110 in Tampa, FL on Saturday, March 10, 2012!

Location is:
1001 East Palm Avenue,
Tampa, FL 33605

SQLSaturday is a training event for SQL Server professionals and those wanting to learn about SQL Server. Admittance to this event is free, all costs are covered by donations and sponsorships. Please register soon as seating is limited, and let friends and colleagues know about the event.
If you are attending please stop by and introduce yourself. Also if you are on Twitter don’t forget to use hashtag #sqlsat110 and see what others are doing.


Due to a family emergency, I had to cancel this speaking engagement; luckily, my slot was filled by another speaker. I hope to make it next year!

SQLskills Community Mentoring and me

As some of you might be aware, Paul Randal (blog | twitter) has just announced the launch of a new SQLskills Community Mentoring program for a few individuals who took part in the SQLskills Immersion Events. I’m deeply honored to be one of the select few who get access to invaluable advice from what I consider the best team in the SQL Server world.

Allow me to go back a year (and boy, what a great year 2011 was), to when I could only dream of being trained by Paul Randal and Kimberly Tripp. My perseverance paid off, and after multiple requests, my buddy Vince and I managed to convince the company to send us to the first week of Immersion training in May this year, especially since it was in the western suburbs of Chicago. Very soon I realized that this was by far the best training I had ever received, period. The amount of content was overwhelming (as Brad put it, “it’s like drinking from a proverbial fire hose”), but the presentation style was extremely engaging and well prepared. Paul and Kimberly don’t just tell you “this is what you should/shouldn’t do”; they go beyond that, explain why and how, and then demo the whole concept. Just like good teachers, they can take a difficult topic and present it in a way that makes it easy for you to follow and absorb a lot of information.
On the fourth day of the IE1 class I volunteered to present a slide deck. I was extremely embarrassed, as I felt I was not in the same league as the rest of the folks present, but everything went so well that it made me reconsider. As a matter of fact, it made me submit a couple of sessions for SQL Saturday #82 in Indianapolis, and this was the beginning of my public speaking adventure. A few days ago I actually presented the same slide deck from IE1 at SQL Saturday #96 in Washington, DC, and the feedback was excellent.

Last month I continued my learning odyssey with the second week of training (IE2). Right away, the fact that we had attended IE1 made a huge difference and helped us understand the subjects a lot better. This time it wasn’t just Paul and Kimberly: Jonathan and Joe joined them, adding their technical skills and expertise to the table. I learned so many new things that I have already used in my environment.

To conclude, these classes have helped me to understand significantly better how SQL Server works. Things like PLE, forwarding records, heap fragmentation, index tuning, page structure, SAN performance, waits and stats, extended events, NUMA architectures, MCM program are only a few of the subjects that were explained in great detail.

Now, on the last binary day of the century, all this becomes even better with the unique chance to have someone like Joe Sack as a mentor and to work with him to advance my career. It’s been an incredible year for me, and I look forward to working with my mentor Joe Sack and seeing what the next 6 months will bring!

Once more, thank you SQLskills. It is an honor to be in such good company.



We are family!

This is the second time I’m writing a post in the “Meme Monday” series, and this time it originates from Tom LaRock (blog | twitter). The topic is: What #SQLfamily means to me.

This weekend I participated in a #SQLfamily reunion in Washington, DC. Some might argue that this is called SQL Saturday #96, but I prefer to see it as a family reunion, because that is how it felt. It started with a #SQLfamily dinner where we had a good time telling stories (some would even make a good Agatha Christie book)… but I digress. We all sat around the table and felt like we were part of a family. I met Tom LaRock, Joey Dantoni, Karen Lopez, Andy Leonard, Brian Moran, Aaron Nelson, Matt Velic, Chris Skorlinski, Wayne Sheffield, Sebastian Meine and many others. It truly felt like a family reunion, and it was.
On Saturday we continued: we had fun presenting our sessions, learning new things, hanging out in the speaker room, and talking about all kinds of things. Everybody contributed to making this #SQLfamily reunion an event to remember. Tom even offered to make coffee but had no idea what to do with the extension cord. (…after the last power outage they had in New England, you would think he should know better 🙂 )

Looking back, I realize that every #SQLfamily event I have participated in has made me feel like this. Whether it was a SQL Saturday in Indianapolis or in Chicago or a user group meeting, I never felt left out. I had my brothers and sisters and uncles and aunts, and they made me feel like I am part of the #SQLfamily.

And the best part is that our #SQLfamily is growing every day. Every reunion we discover more people we can relate to even though we are not related (but in a way we are).

God bless our #SQLfamily!

Presenting 2 sessions at SQL Saturday #96 in Washington, DC

I am deeply honored that I was chosen to present 2 sessions at SQL Saturday #96 in Washington, DC this Saturday, November 5, 2011!

Location is:
Microsoft MTC
5404 Wisconsin Ave, Suite 700
Chevy Chase, MD 20815


Subway: Friendship Heights Metro Station

SQLSaturday is a training event for SQL Server professionals and those wanting to learn about SQL Server. Admittance to this event is free, all costs are covered by donations and sponsorships. Please register soon as seating is limited, and let friends and colleagues know about the event.
If you are attending please stop by and introduce yourself. Also if you are on Twitter don’t forget to use hashtag #sqlsat96 and see what others are doing.

The 2 sessions I will be presenting are:

Optimizing SQL Server I/O with Solid State Drives and Join Operations – Explaining Merge, Hash and Loop

Hope to see you there!


Update: 11-09-2011

I have uploaded the slide decks to the session pages:

Thanks to all who attended, and don’t forget to leave feedback at:

Speaking on SQL Server Algebrizer at SQL Saturday #82 Indianapolis

I am happy to announce that my session Back to Basics – How Does the Algebrizer Work has been selected for SQL Saturday #82 in Indianapolis, IN on Saturday, June 25, 2011!

Location is:
University of Indianapolis-1400 E Hanna Ave
Indianapolis, IN 46227


SQLSaturday is a training event for SQL Server professionals and those wanting to learn about SQL Server. Admittance to this event is free, all costs are covered by donations and sponsorships. Please register soon as seating is limited, and let friends and colleagues know about the event.
If you are attending please stop by and introduce yourself. Also if you are on Twitter don’t forget to use hashtag #sqlsat82 and see what others are doing.

Hope to see you there!

What’s the big difference between Oracle and SQL Server indexes (Part I)

This entry is part 1 of 1 in the series SQL Server vs. Oracle

Last week Erin Stellato (blog | twitter) asked a very interesting question on Twitter that triggered this post:

So how is it you can have good perf in Oracle, w/o IOTs, but in SQL Server, everyone says need CIs for good perf?

There are two aspects to that question

  1. What is the difference between index implementations in Oracle vs. SQL Server?
  2. How do you define good performance?

Until now I had not realized that this subject is not very well covered, and since it involves diving into the internals, I offered to make an attempt to fill this gap over the Memorial Day long weekend. I soon realized that there is more to this than I originally thought. This will be the first post in a series that I am planning to write in the coming weeks.

First, let’s begin by comparing the different types of indexes supported by Oracle and SQL Server:

| Type of index | SQL Server 2008 R2 | Oracle 11g |
|---|---|---|
| B-tree indexes | Yes (non-clustered) | Yes (default) |
| B+ tree indexes | Yes (clustered indexes) | Yes (IOT) |
| Function-based indexes | Yes (column has to exist in table) | Yes |
| Filtered indexes | Yes | Yes |
| Hash cluster indexes | No | Yes |
| B-tree cluster indexes | No | Yes |
| Bitmap indexes | No | Yes |

So both Oracle and SQL Server support B-tree indexes, which are ordered lists of key values associated with the storage location of the table record that contains the respective value. So far there doesn’t seem to be a big difference between the two RDBMSs. Both also support B+ tree indexes, although there are differences between the two implementations. B+ trees are similar to B-trees, but the table records are stored in the leaf nodes of the primary key index. This allows fast access for singleton searches (exact match) or range searches on the primary key of the table. Microsoft calls them clustered indexes (CI), while Oracle uses the term Index Organized Tables (IOT).
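To make the distinction concrete, here is a toy Python sketch (my own illustration, not actual engine code): in a clustered index/IOT the leaf level is the row itself, while an index over a heap stores only a locator and needs an extra hop to fetch the row.

```python
# Toy model only: sorted Python lists stand in for B-tree leaf levels.
import bisect

# "Clustered" storage: rows kept sorted by key; the leaf IS the row.
clustered = [(1, "Anna"), (3, "Bob"), (7, "Carol")]               # (key, row)

# Index over a heap: sorted keys plus row locators, rows live elsewhere.
heap = {0x1A: (3, "Bob"), 0x2C: (1, "Anna"), 0x3F: (7, "Carol")}  # rowid -> row
nc_index = [(1, 0x2C), (3, 0x1A), (7, 0x3F)]                      # (key, rowid)

def clustered_lookup(key):
    i = bisect.bisect_left(clustered, (key,))
    return clustered[i][1]          # the row is right there in the leaf

def nc_lookup(key):
    i = bisect.bisect_left(nc_index, (key,))
    rowid = nc_index[i][1]          # extra hop: follow the locator...
    return heap[rowid]              # ...to fetch the row from the heap

print(clustered_lookup(3))   # Bob (single traversal)
print(nc_lookup(3))          # (3, 'Bob') (traversal + heap fetch)
```

The extra fetch in `nc_lookup` is the sketch-level analog of a bookmark/RID lookup, which is why singleton and range searches on the clustering key are so cheap.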

Let’s look at non-clustered indexes, which are the default indexes in Oracle. Here the two implementations are quite similar: indexes use a B-tree structure that contains the key values and a unique reference to the storage location of the record. This is where the implementations start to separate:

In Oracle, every record has a unique (within the table) 10-byte pseudocolumn, ROWID, representing the physical location on disk where the record is stored. When printed, each ROWID is displayed in a BASE64 format (A-Za-z0-9+/), e.g. AAAAaoAATAAABrXAAA.
ROWID has a four-piece format, OOOOOOFFFBBBBBBRRR:

  • OOOOOO: The data object number that identifies the database segment (AAAAao in the example). Schema objects in the same segment, such as a cluster of tables, have the same data object number.
  • FFF: The tablespace-relative datafile number of the datafile that contains the row (file AAT in the example).
  • BBBBBB: The data block that contains the row (block AAABrX in the example). Block numbers are relative to their datafile, not tablespace. Therefore, two rows with identical block numbers could reside in two different datafiles of the same tablespace.
  • RRR: The row in the block.
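As an illustration, a small hypothetical Python helper (not part of Oracle, just my sketch) can decode those base64 digits back into the four components, using the example ROWID above:

```python
# Hypothetical decoder for the printed ROWID's OOOOOOFFFBBBBBBRRR layout.
B64 = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/"

def b64_value(digits):
    """Treat a run of base64 digits as one big-endian base-64 number."""
    n = 0
    for ch in digits:
        n = n * 64 + B64.index(ch)
    return n

def decode_rowid(rowid):
    assert len(rowid) == 18
    return {
        "object": b64_value(rowid[0:6]),   # OOOOOO: data object number
        "file":   b64_value(rowid[6:9]),   # FFF: relative datafile number
        "block":  b64_value(rowid[9:15]),  # BBBBBB: block within the datafile
        "row":    b64_value(rowid[15:18]), # RRR: row within the block
    }

print(decode_rowid("AAAAaoAATAAABrXAAA"))
# {'object': 1704, 'file': 19, 'block': 6871, 'row': 0}
```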

Users can use ROWIDs directly for several functions:

  • Rowids are the fastest way of accessing  rows.
  • Rowids can be used to see how a table is organized.
  • Rowids uniquely identify rows in a given table.

There are two completely different perspectives when migrating from one RDBMS to another.
Most Oracle users have no concept of clustered or non-clustered indexes, because primary key indexes created in Oracle are balanced non-clustered indexes that include the ROWID for fast access.

A common mistake Oracle users make is to assume that indexes are implemented the same way in SQL Server, and to underestimate the performance hit that comes with using a large composite primary key on a clustered table in SQL Server. It takes a while to understand that non-clustered index size in SQL Server can be much bigger than expected and is not necessarily proportional to the size of the columns specified in the non-clustered index, because every non-clustered index row also carries the clustering key. Based on this, some Oracle experts say that clustered indexes in SQL Server have a 110% overhead, because the index key is stored “twice”: once with the index entry in the intermediate index nodes, and again with the row data at the leaf level.
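A quick back-of-envelope sketch makes the point (illustrative numbers only, not the storage engine’s exact math; the 7-byte per-row overhead and the key widths are assumptions I picked for the example):

```python
# Rough estimate: every non-clustered (NC) index leaf row carries the
# clustering key, so a wide composite clustering key inflates every index.
def nc_index_bytes(rows, nc_key_bytes, clustering_key_bytes, overhead=7):
    # leaf row ~= NC key + clustering key + per-row overhead (rough)
    return rows * (nc_key_bytes + clustering_key_bytes + overhead)

rows = 10_000_000
narrow = nc_index_bytes(rows, nc_key_bytes=4, clustering_key_bytes=4)   # int CI key
wide   = nc_index_bytes(rows, nc_key_bytes=4, clustering_key_bytes=48)  # 3 x uniqueidentifier

print(f"narrow CI key: {narrow / 1024**2:.0f} MB")   # ~143 MB
print(f"wide CI key:   {wide / 1024**2:.0f} MB")     # ~563 MB
```

Same indexed column, same row count, but the wide clustering key makes the non-clustered index almost four times larger.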

Another performance hit can come from the default value of the free space (PCTFREE) that is left on a page, as Oracle by default leaves 10% free space on each block. After a data block is filled to the limit determined by PCTFREE, Oracle considers the block unavailable for the insertion of new rows until the percentage of used space in that block falls beneath the parameter PCTUSED.
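SQL Server’s rough analog is FILLFACTOR (though it only applies when an index is built or rebuilt, not continuously like PCTFREE). A sketch of the arithmetic, with an assumed 200-byte row and ignoring finer per-row overhead:

```python
# How free-space settings cap rows per page: PCTFREE 10 in Oracle and
# FILLFACTOR = 90 in SQL Server both leave ~10% of each block/page empty.
PAGE_BYTES = 8192   # SQL Server page size (also a common Oracle block size)
USABLE = 8096       # SQL Server page minus the 96-byte page header

def rows_per_page(row_bytes, fillfactor=100):
    return int(USABLE * fillfactor / 100) // row_bytes

print(rows_per_page(200))                 # 40 rows on a 100%-full page
print(rows_per_page(200, fillfactor=90))  # 36 rows, ~10% left for growth
```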

The alternative of using a non-clustered table (heap) is not recommended in most cases; Paul Randal (blog | Twitter) has some excellent posts on this subject, as does Erin’s recent blog post on Internals and Deletes for SQL University, which I highly recommend reading.

In the opposite camp, most SQL Server users look for an alternative to clustered indexes and end up choosing Index Organized Tables (IOT) instead of simple tables, just because they search for something that has the same structure as clustered tables in SQL Server in order to get comparable performance.

In the next part of my series I will continue with access methods in Oracle, and other index types.

Using CONVERT with COLLATE to strip accents from Unicode strings

Today Mladen Prajdic posted on Twitter a SQL statement that captured my attention:

 SELECT CONVERT(varchar(50),
        N'æøåáäĺćçčéđńőöřůýţžš' COLLATE Cyrillic_General_CI_AI);
 -- returns: aoaaalcccednooruytzs

Notice the conversion using the Cyrillic_General_CI_AI collation: this shows an interesting mechanism for code page translation that we could not explain:

Why does the Cyrillic_General_CI_AI or Greek_CI_AI collation successfully convert accented characters to their base ASCII characters?

I found an answer to this on a Microsoft KB page:

What Happens when a Character on One Code Page is Unavailable on Another

 If a conversion occurs and a character on one code page is unavailable on another, the character gets converted to its closest equivalent character on the other code page. For example, ASCII 224 (“alpha”) on code page 437 is converted to 97 (“a”) on the ANSI code page. When you convert it back to code page 437, you receive 97 (“a”). This is a problem, because you have now lost the original 224 (“alpha”). This can potentially cause incorrect data to be saved on your server, as happens in the following two examples:

Example 1:

You have an OEM server in which you are inserting data from an ANSI client, and your AutoAnsiToOem option is enabled. If the character that you are inserting is valid in both code pages, the correct data will be saved. However, if the character is valid in ANSI but not in OEM, the correct data will not be saved.

Example 2:

You insert data into an ANSI server from an OEM client, and the AutoAnsiToOem option is enabled. If the character that you are inserting is valid in both code pages, the correct data will be saved. However, if the character is valid in OEM and not in ANSI, then correct data will not be saved.

Nevertheless, this is very useful for stripping exotic accents from strings (e.g. before an export).
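For comparison, a rough Python analog of the same trick (my sketch, not what SQL Server does internally) is Unicode decomposition followed by dropping the combining marks. Note it is not identical to the best-fit code-page conversion above: characters with no decomposition (æ, ø, đ) are simply dropped here, while the collation-based conversion best-fits them to a, o, and d.

```python
# Strip accents by decomposing to NFKD and discarding non-ASCII marks.
import unicodedata

def strip_accents(text: str) -> str:
    decomposed = unicodedata.normalize("NFKD", text)       # é -> e + U+0301
    return decomposed.encode("ascii", "ignore").decode("ascii")

print(strip_accents("áäĺćçčéńőöřůýţžš"))  # aalcccenooruytzs
```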


SQL Azure – an alternative to what?

After attending a presentation on SQL Azure by Microsoft’s George Huey, an interesting question came to my mind regarding the actual use for a cloud-based database that has over 1,400 incompatibilities with SQL Server: what is it for, since it is not a viable alternative to a production database?
I would say that it is an alternative to MS Access!
Let me explain. I definitely see the use for it when you need to deploy a solution quickly for an event or conference and you need a reliable backend for your application that fits a tight budget and an even tighter schedule, or even, as George suggested, for a startup that has no money for a datacenter-hosted DB environment.
In these situations, deploying a 1 GB database to host non-confidential information for under $10/month seems like a no-brainer. But I would not recommend it when HIPAA compliance is required, for instance.

My thanks to George for a great presentation.

What three events brought you here?

Recently Paul Randal (aka Mr. Tripp) started a web version of the tag game in the SQL Server blogging community about the three most important events that shaped your career and got you to this point in your life. It is a very difficult question, because unlike data, life events do not accept a SELECT TOP 3 … ORDER BY [ImportanceRating]. For humans, the sorting function for life-changing events is a subjective one, and it depends on the impact each event had on the individual. It is a lot harder than most would think, and I spent the weekend thinking about this and trying to review the most important events that shaped my life and career. Here’s what I came up with:

1) My Family

As I grew up in communist Romania, I was one of the fortunate kids that had access to technology, thanks to my parents, both university Computer Science professors. My father still teaches databases, from the 101 course to Oracle PL/SQL, and also Assembly Language. My mom is now retired, but she used to teach the Fundamentals of Computer Programming to freshman students. As a kid I got to see all the different generations of technology, from things like ferrite memory to BASIC programming on a Romanian replica of the Sinclair ZX Spectrum+ Z80 home computer. I have to thank my parents for explaining to me how things work; as a very inquisitive boy, I absorbed all that information without realizing that one day it would help me understand systems as a whole and see the big picture where others only see the frontend. The first database I worked on was dBase III, the precursor of FoxPro (anybody who used it understands why some people call database tables “files”). The interesting side to this is that while I was hooked on computers, my sister decided to go to Law School, finally landing in the wireless communications industry.

2) Discovering relational algebra

During my third year at the Politehnica University I found an opportunity to apply for an EU scholarship within the Erasmus Programme. I remember that one of the requirements was to be fluent in French, and because of that only 3 or 4 students applied, and I was selected. I chose the “DB and Artificial Intelligence” major at “Universite de Paris 6 – Pierre & Marie Curie”, and one of the courses was “Relational Databases”. We started from the basics of relational algebra operations and representing query plans as a tree that can be transformed, all of which I was fascinated by. I kept my interest in this after my year in Paris: when I returned home, I got my Software Engineering MS in “Database Systems Optimizations”.

3)Real world challenge

After graduation I decided to continue the family tradition and started teaching labs on “Programming Languages” and “Relational Database Design” at the Politehnica University. A few years later I decided to get a job as an Oracle DBA. On my first day on the job they gave me a desktop and some form of an install manual for Oracle and their product, and asked me to read it and ask around if I had questions. Two days later, to everybody’s surprise, I had their product installed and running on Oracle 9i, even though it was designed for 8i. This was one of the first moments when I realized that I like challenges and that I can live up to expectations.
A few years later came a moment when I realized just how lucky I am: it was when I found out that I was one of those selected for the Diversity Visa program to get a Green Card and move to the US.

From that point forward I did not find it hard to get a job (I got my first job in the US in three days), but I learned that the most important thing is to find the right job and the right employer, one that would challenge me, just like Brent Ozar wrote in his post on the same subject:

I want to be a successful employee in my employer’s eyes, but when I take a job, one of the questions is, “One year after someone’s taken this position, what does success look like?  What is the best employee doing?  How are you rewarding them for what they’ve done?” 
In IT, this question takes people by surprise, but the answers reveal a lot.

Until this day I can proudly say that I have exceeded all my employers’ expectations but I cannot say the same about all the jobs I held.

I’m going to tag Ted Krueger, Jorge Segarra, Mladen Prajdic and Jonathan Kehayias to see how they answer this question. The list does not have to end here: if you would like to answer this question, please link back to this post or to the original post by Paul.

Query optimizer pains

As requested by Yavor Angelov from MSFT via Twitter, I’m writing down what seem to be my biggest pain points with the Query Optimizer (QO) in SQL Server.

Whenever you have a join between large tables, the QO will look at the indexes that exist on the join keys. If some indexes are missing, it seems that the default join type chosen is Nested Loops, which is OK for fast first results but is really slow if memory is scarce on the server or there is a high load.
This is also hard to execute in parallel.
Usually in this situation I have to look at the execution plan and suggest the proper plan to the QO by creating the missing indexes.
I think creating a temporary index when the cardinality of the joined tables is high would help a lot to change the query plan (QP) from nested loops to hash joins, which can take advantage of multiple processors. The time to create the missing non-clustered index is also very low.
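To illustrate why the join choice matters, here is a toy Python sketch (not the actual QO algorithms, and the input sizes are made up): a naive nested-loop join touches every outer/inner pair, while a hash join builds a hash table on one input and does one probe per row of the other.

```python
def nested_loop_join(outer, inner):
    """Naive nested loops: |outer| x |inner| comparisons."""
    comparisons, out = 0, []
    for ok, ov in outer:
        for ik, iv in inner:          # rescan the whole inner input each time
            comparisons += 1
            if ok == ik:
                out.append((ok, ov, iv))
    return out, comparisons

def hash_join(build, probe):
    """Hash join: build the table once, then one probe per row."""
    table = {}
    for k, v in build:                # build phase on the smaller input
        table.setdefault(k, []).append(v)
    # probe phase: (key, probe value, build value) to match the order above
    return [(k, v, bv) for k, v in probe for bv in table.get(k, [])]

a = [(i, f"a{i}") for i in range(1000)]        # outer input, 1000 rows
b = [(i, f"b{i}") for i in range(0, 1000, 2)]  # inner input, 500 rows

nl_rows, nl_cmp = nested_loop_join(a, b)
hj_rows = hash_join(b, a)                      # build on the smaller side
assert sorted(nl_rows) == sorted(hj_rows)      # same result either way
print(nl_cmp)         # 500000 comparisons for nested loops
print(len(hj_rows))   # 500 matching rows, ~1500 hash operations total
```

Same 500 result rows, but nested loops did half a million comparisons while the hash join did work roughly linear in the two input sizes, which is also much easier to parallelize.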

A lot of the time it seems that the QO does not look at the statistics to apply selection first on the small tables and limit the result set that has to be joined with the other tables to a small number of rows.
The QO should calculate the QP cost using different methods and then pick the QP with the smallest cost. This does not seem to work very well, especially when there is a mix of small and large tables to be joined.

To test these scenarios you can use a virtual server with minimal memory resources, and you’ll see that the differences between QPs become very evident.