
[SQL Snacks Video] SQL Server Table Partitioning 103

28 Apr

Welcome back for part 3 of my SQL Snack Pack on Table Partitioning! If you have not watched the first two videos, I would highly encourage you to do so.

In this third video I will cover the following:
How partitioning can help reduce blocking and resource contention
How partitioning can keep part of a database available during a disaster

Code for Query Window 1:

--Partitioning 103 Query Window 1
USE master
GO
IF EXISTS (SELECT name FROM sys.databases WHERE name = 'PartitionTest')
DROP DATABASE PartitionTest
GO

CREATE DATABASE [PartitionTest]
 ON  PRIMARY 
( NAME = N'PartitionTest', FILENAME = N'Z:\SQLDATA\PartitionTest.mdf' , SIZE = 4096KB , FILEGROWTH = 1024KB ), 
FILEGROUP [FGARCHIVE] 
( NAME = N'FGARCHIVE', FILENAME = N'X:\SQLPart\FGARCHIVE.ndf' , SIZE = 4096KB , FILEGROWTH = 1024KB ),
 FILEGROUP [FG2013] 
( NAME = N'FG2013', FILENAME = N'Z:\SQLDATA\FG2013.ndf' , SIZE = 4096KB , FILEGROWTH = 1024KB ),
FILEGROUP [FG2014] 
( NAME = N'FG2014', FILENAME = N'Z:\SQLDATA\FG2014.ndf' , SIZE = 4096KB , FILEGROWTH = 1024KB ),
FILEGROUP [FG2015] 
( NAME = N'FG2015', FILENAME = N'Z:\SQLDATA\FG2015.ndf' , SIZE = 4096KB , FILEGROWTH = 1024KB )
 LOG ON 
( NAME = N'PartitionTest_log', FILENAME = N'Y:\SQLLOGS\PartitionTest_log.ldf' , SIZE = 1024KB , FILEGROWTH = 10%)
GO

USE PartitionTest

--Three boundary values yield four partitions, matching the four filegroups in the scheme below
CREATE PARTITION FUNCTION partFn_Date (datetime)
AS RANGE RIGHT FOR VALUES ('2013-01-01','2014-01-01','2015-01-01')
GO

CREATE PARTITION SCHEME part_Date
AS PARTITION partFn_Date
TO (FGARCHIVE,FG2013,FG2014,FG2015)
GO

Create Table Orders
(
OrderID Int Identity (10000,1),
OrderDesc Varchar(50),
OrderDate datetime,
OrderAmount money
)
ON part_Date (OrderDate)
GO

---Insert 2012 Orders
INSERT INTO Orders Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2012-01-01',RAND()*100)
INSERT INTO Orders Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2012-02-01',RAND()*100)
INSERT INTO Orders Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2012-03-01',RAND()*100)

---Insert 2013 Orders
INSERT INTO Orders Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2013-01-01',RAND()*100)
INSERT INTO Orders Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2013-02-01',RAND()*100)
INSERT INTO Orders Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2013-03-01',RAND()*100)

---Insert 2014 Orders
INSERT INTO Orders Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2014-01-01',RAND()*100)
INSERT INTO Orders Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2014-02-01',RAND()*100)
INSERT INTO Orders Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2014-03-01',RAND()*100)

---Insert 2015 Orders
INSERT INTO Orders Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2015-01-01',RAND()*100)
INSERT INTO Orders Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2015-02-01',RAND()*100)
INSERT INTO Orders Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2015-03-01',RAND()*100)



Create Table Orders2
(
OrderID Int Identity (10000,1),
OrderDesc Varchar(50),
OrderDate datetime,
OrderAmount money
)
GO

---Insert 2012 Orders
INSERT INTO Orders2 Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2012-01-01',RAND()*100)
INSERT INTO Orders2 Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2012-02-01',RAND()*100)
INSERT INTO Orders2 Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2012-03-01',RAND()*100)

---Insert 2013 Orders
INSERT INTO Orders2 Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2013-01-01',RAND()*100)
INSERT INTO Orders2 Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2013-02-01',RAND()*100)
INSERT INTO Orders2 Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2013-03-01',RAND()*100)

---Insert 2014 Orders
INSERT INTO Orders2 Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2014-01-01',RAND()*100)
INSERT INTO Orders2 Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2014-02-01',RAND()*100)
INSERT INTO Orders2 Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2014-03-01',RAND()*100)

---Insert 2015 Orders
INSERT INTO Orders2 Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2015-01-01',RAND()*100)
INSERT INTO Orders2 Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2015-02-01',RAND()*100)
INSERT INTO Orders2 Values ('Order ' + Cast(SCOPE_IDENTITY() as varchar(10)),'2015-03-01',RAND()*100)
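Before moving on, it is worth confirming where each row actually landed. The following sketch is not part of the video's script; it assumes the objects above were created as-is, and uses the $PARTITION function plus the catalog views to map rows and row counts to partitions and filegroups:

```sql
--Sanity check: which partition does each Orders row live in?
--$PARTITION applies the partition function to a value and returns
--the partition number (1 = FGARCHIVE, 2 = FG2013, and so on).
SELECT OrderID,
       OrderDate,
       $PARTITION.partFn_Date(OrderDate) AS PartitionNumber
FROM Orders
ORDER BY OrderDate

--Row counts per partition, joined to the filegroup each partition sits on
SELECT p.partition_number,
       fg.name AS filegroup_name,
       p.rows
FROM sys.partitions AS p
JOIN sys.indexes AS i
  ON i.object_id = p.object_id AND i.index_id = p.index_id
JOIN sys.destination_data_spaces AS dds
  ON dds.partition_scheme_id = i.data_space_id
 AND dds.destination_id = p.partition_number
JOIN sys.filegroups AS fg
  ON fg.data_space_id = dds.data_space_id
WHERE p.object_id = OBJECT_ID('dbo.Orders')
ORDER BY p.partition_number
```

The 2012 rows should report partition 1 (FGARCHIVE), which is what makes the disaster-recovery demo later in this post work.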


Code for Query Window 2:

--Partitioning 103 Query Window 2
Use PartitionTest

BEGIN TRAN --intentionally left open (no COMMIT) so the locks are held for the demo

UPDATE Orders
SET OrderDesc = 'Locking for Update'
WHERE OrderDate < '2014-01-01'

UPDATE Orders2
SET OrderDesc = 'Locking for Update'
WHERE OrderDate = '2014-01-01'
Go
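While Window 2's transaction is still open, a separate session can show exactly which sessions are blocked and by whom. This DMV query is a sketch of my own, not from the video:

```sql
--Run from any other session while Window 2's transaction is open.
--Lists waiting sessions, their wait type, and the blocking session,
--with the object whose lock they are waiting on where it can be resolved.
SELECT r.session_id,
       r.blocking_session_id,
       r.wait_type,
       r.wait_time,
       OBJECT_NAME(p.object_id) AS blocked_object
FROM sys.dm_exec_requests AS r
LEFT JOIN sys.dm_tran_locks AS l
  ON l.request_session_id = r.session_id
 AND l.request_status = 'WAIT'
LEFT JOIN sys.partitions AS p
  ON p.hobt_id = l.resource_associated_entity_id
WHERE r.blocking_session_id <> 0
```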

Code for Query Window 3:

--Partitioning 103 Query Window 3
Use PartitionTest

SELECT *
FROM Orders
Where OrderDate >= '2014-01-01'
Go

SELECT *
FROM Orders
Where OrderDate < '2014-01-01'
Go

SELECT *
From Orders2
Where OrderDate >= '2014-01-01'
Go
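Once you have observed which of the SELECTs above are blocked and which return immediately, end the open transaction in Window 2 to release the locks (this step is implicit in the video):

```sql
--Back in Query Window 2: end the open transaction to release the locks
--so any blocked SELECT in Window 3 can complete
ROLLBACK TRAN
```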


---For DR Portion
USE master
---Run CHECKPOINT before pulling the drive so committed pages are flushed to disk
CHECKPOINT
DBCC DROPCLEANBUFFERS

ALTER DATABASE PartitionTest SET OFFLINE
ALTER DATABASE PartitionTest SET ONLINE
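After the X:\ drive is pulled and the database is cycled offline/online, it should come back with only the files on the missing drive unavailable. One way to confirm this (a sketch of my own, not part of the video's script) is to check per-file state:

```sql
--Files on the missing X:\ drive should show RECOVERY_PENDING;
--files on the surviving drives remain ONLINE
SELECT name,
       physical_name,
       state_desc
FROM sys.master_files
WHERE database_id = DB_ID('PartitionTest')

--Data on the surviving filegroups is still queryable
SELECT * FROM PartitionTest.dbo.Orders WHERE OrderDate >= '2013-01-01'
```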



4 responses to “[SQL Snacks Video] SQL Server Table Partitioning 103”

  1. Erika

    January 2, 2015 at 2:01 AM

    Thanks Ayman,
    Awesome Snacks, I have been following them very closely and very useful tips in all.
    After seeing SQL Server keep the database available, at least for the partitions on drives other than X:\, I’m wondering about a different case: if we need to move one of the filegroup’s files to a different location, is there a way to do it while keeping the database available, at least for the data outside the filegroup being moved?

    Thanks in advance.

    Erika

    • Ayman El-Ghazali

      January 2, 2015 at 6:58 AM

      Thank you for your feedback! I’ve put together a series of SQL Snacks on performance tuning. In part 2, where I talk about files and filegroups, I demo adding a new filegroup with files on “another drive.” Adding a filegroup and its files can be done while the database is online and undisturbed. You can then move your tables to the new filegroup by rebuilding the clustered index on it. This will affect performance while the clustered index is rebuilt, since essentially all of the data is moved to the new filegroup.

      Let me also answer this question in a different way. Moving the actual files of a live database requires an ALTER DATABASE command, and the database must be taken offline and brought back online after the statement is run and the physical files have been moved. That means downtime is required, which is why I suggested above that you move the data onto newly created filegroups instead. If you really want zero downtime, that first approach is the way to go. Remember, for existing data you still need to move the actual files from one place to another, and as long as the database is live those files are locked and cannot be moved.

      I hope this helps clarify things.

  2. Mehboob

    March 19, 2016 at 7:29 PM

    Pretty good, keep rocking!

  3. kumaraguruRamar

    September 11, 2017 at 7:55 AM

    This video and the code are very helpful. Great work!

