Sunday, 16 July 2017

SQL Backup To AWS S3


In this blog post I am going to help you move your SQL Server backups to AWS S3.
Below are the steps we will go through in this post.
  1. Creating an S3 bucket in your AWS account.
  2. Creating an IAM account in AWS.
  3. Installing the AWS CLI on your database server.
  4. Moving backups from on-premises to S3 using PowerShell.
  5. Moving backups from AWS EC2 to S3 using PowerShell.

First thing you need is an AWS account.



Creating an S3 bucket in your AWS account
  • Log in to the AWS console.
  • Click on the storage service S3.
  • Click Create bucket.
  • Enter a bucket name, choose the region where you want to place this storage, and click Next.
  • Set the properties of the bucket. Enable the versioning property and click Next.
  • Give read and write permissions to the user or owner of the bucket and click Next.
  • Review the bucket details and click Create bucket.
  • The new S3 bucket has been created.

   

Creating IAM accounts in AWS security
  • Click on the IAM service under Security.
  • Select Users in the left pane of the IAM dashboard.
  • Click Add user.
  • Provide a username, enable programmatic access, and click Next.
  • Add the user to the Admin group and click Next.
  • Review the user details and click Create user.
  • This creates the user and provides an Access Key ID and Secret Access Key. Make sure you make a note of them for later use with the AWS CLI.


    Installing the AWS CLI on your database server

       Download and install the AWS CLI on the database server where the backup files are located.
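After installation, you can confirm the CLI is available from a command prompt (the exact version string will vary by what you installed):

```shell
aws --version
```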



    Moving backups from on-premises to S3 using PowerShell


   Step 1:
   Configure the AWS CLI on the database backup server.
   Open a command prompt and configure the AWS CLI as below.


   Provide the Access Key ID and Secret Access Key that you saved while creating the IAM user.
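A typical configuration session looks like the following sketch (the region and output format shown are only example values; substitute your own):

```shell
aws configure
# AWS Access Key ID [None]: <your Access Key ID>
# AWS Secret Access Key [None]: <your Secret Access Key>
# Default region name [None]: us-east-1
# Default output format [None]: json
```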

  Step 2:
  Save the below PowerShell script as C:\Scripts\S3_copy.ps1.

  Provide the bucket name and backup location in the script.

Import-Module -Name AWSPowerShell

$BucketName = "aws-s3-sql-backup"
$s3Directory = "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Backup"
$concurrentLimit = 50
$inProgressFiles = @()

foreach ($i in Get-ChildItem $s3Directory)
{
  # Upload the file to S3 only if an object with the same key does not already exist,
  # then add the filename to a collection.
  if (!(Get-S3Object -BucketName $BucketName -Key $i.Name)) {
    Write-S3Object -BucketName $BucketName -Key $i.Name -File $i.FullName
  }
  $inProgressFiles += $i.Name

  # Wait to continue iterating through files if there are too many concurrent uploads.
  while ($inProgressFiles.Count -gt $concurrentLimit)
  {
    Write-Host "Before: $($inProgressFiles.Count)"

    # Reassign the array, excluding files that have completed the upload to S3.
    $inProgressFiles = @($inProgressFiles | Where-Object { @(Get-S3Object -BucketName $BucketName -Key $_).Count -eq 0 })

    Write-Host "After: $($inProgressFiles.Count)"

    Start-Sleep -s 1
  }

  Start-Sleep -s 1
}

  
  Step 3:
  Open SSMS and create a backup job with the below two steps.
  • Job step 1 - Takes backups of user databases.
  • Job step 2 - Copies recent backups from the database server backup location to the AWS S3 bucket.



   Properties of Step 2:

   This will kick off the PowerShell script placed on the C drive.
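As a sketch, a CmdExec-style job step that launches the script could use a command like this (the path matches Step 2; passing -ExecutionPolicy Bypass is one way to sidestep policy issues for a single invocation):

```shell
powershell.exe -ExecutionPolicy Bypass -File "C:\Scripts\S3_copy.ps1"
```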

    Please check your S3 bucket once the job completes.

   Note: Before running the PowerShell script, make sure the execution policy on your database server is set to RemoteSigned (or less restrictive).
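For example, run the following once from an elevated PowerShell prompt on the database server (RemoteSigned allows locally created scripts such as S3_copy.ps1 to run):

```powershell
Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope LocalMachine
```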

     
     Moving backups from AWS EC2 to S3

  You can use the same script on an AWS EC2 SQL Server machine to copy backup files to S3.




Tuesday, 7 February 2017

Enable XP_CMDSHELL [SQLSTATE 42000] (Error 15281)


The below error clearly states that a stored procedure tried to execute 'xp_cmdshell'.

By default, 'xp_cmdshell' is disabled. Enabling 'xp_cmdshell' resolves this error.


Executed as user: Domain\Login_User. SQL Server blocked access to procedure 'sys.xp_cmdshell' of component 'xp_cmdshell' because this component is turned off as part of the security configuration for this server. A system administrator can enable the use of 'xp_cmdshell' by using sp_configure. For more information about enabling 'xp_cmdshell', search for 'xp_cmdshell' in SQL Server Books Online. [SQLSTATE 42000] (Error 15281).  The step failed.



Enable ‘xp_cmdshell’ in T-SQL

-- Show advanced options
EXEC sp_configure 'show advanced options', 1;
GO
RECONFIGURE
GO

-- Enable xp_cmdshell
EXEC sp_configure 'xp_cmdshell', 1
GO
RECONFIGURE
GO
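Once enabled, you can verify it with a quick smoke test (the command shown is just an example):

```sql
EXEC xp_cmdshell 'whoami';
```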


Enable ‘xp_cmdshell’ in SSMS

Right-click on the instance and select ‘Facets’.



Select ‘Surface Area Configuration’ in the Facet drop-down.


Change the facet property ‘XPCmdShellEnabled’ to True. Click OK.




Tuesday, 31 January 2017

Stretch Database SQL Server 2016


Introduction to Stretch Database:

What?

Stretch Database is one of the most highly anticipated new features in Microsoft SQL Server 2016.

This feature securely archives your historical data (cold data) to an Azure SQL Database (PaaS cloud). Once you enable this feature, SQL Server silently and transparently migrates data to Azure SQL Database.

Why?

  • Ever-growing databases with millions and billions of rows.
  • Cost reduction.
  • Low latency.
-------------------------------------------------------

Terminology:

Cold data/historical data (> 10 years)
Warm data (> 5 years)
Hot data/live data/on-premises (< 5 years)
-----------------------------------------------------

How does StretchDB work?

Once you enable this feature for a table, SQL Server silently and transparently moves/migrates data to Azure SQL Database. When you query those tables, the query optimizer knows which rows are on your server and which rows are in Azure, and divides the workload accordingly. The query processing on the Azure rows takes place in Azure so the only latency is for the return of the rows over the network.
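As a sketch, enabling Stretch at the database and table level looks like the following (the server name, credential, and table names here are hypothetical, and credential creation is omitted):

```sql
-- Link the database to Azure (requires a database scoped credential, not shown).
ALTER DATABASE [AdventureWorks2014]
    SET REMOTE_DATA_ARCHIVE = ON
    (SERVER = N'<azure-server>.database.windows.net', CREDENTIAL = [StretchCredential]);

-- Enable Stretch for a single table and start migrating its rows to Azure.
ALTER TABLE [dbo].[StretchTest]
    SET (REMOTE_DATA_ARCHIVE = ON (MIGRATION_STATE = OUTBOUND));
```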

--------------------------------------------------------------------------------

Limitations for Stretch Database

  • Not all tables will be eligible for being Stretch-enabled. Certain table properties, data and column types, constraints, and indexes are not supported.
  • Memory-optimized and replicated tables.
  • Tables that contain FILESTREAM data, or that use Change Tracking or Change Data Capture.
  • Data types such as timestamp, sql_variant, XML, geography or columns that are Always Encrypted.
  • Check and default constraints or foreign key constraints that reference the table.
  • You cannot run UPDATE or DELETE statements, or run CREATE INDEX or ALTER INDEX operations on a Stretch-enabled table.
-------------------------------------------------------------------------------
Backup and Restore of a Stretch Database
Currently in SQL Server 2016, when a database that is Stretch-enabled is backed up, a shallow backup is created which does not include the data that has been migrated to the Azure SQL database. It is expected that with the RTM release of SQL Server 2016, backing up a Stretch-enabled database will create a deep backup that contains both local and stretched data.

When restoring a database that is Stretch-enabled, you'll have to reconnect the local database to the remote Azure SQL Database. You do this by running the stored procedure sys.sp_reauthorize_remote_data_archive as a db_owner. If I now back up the Stretch-enabled AdventureWorks2014 database and restore it, I will no longer be able to query the StretchTest table until I reconnect to Azure SQL Database by running:
USE [AdventureWorks2014];
GO
EXEC sys.sp_reauthorize_remote_data_archive @azure_username,
@azure_password;
GO
----------------------------------------------------------------------------------
Un-migrate Data

To migrate data back to local storage from an Azure SQL Database, you will need to create a new local table with the same schema as the Stretch-enabled table. You then have to copy the data from the Stretch-enabled table into the new local table. Once the data is copied, you drop the Stretch-enabled table and rename the new local table to the name of the Stretch-enabled table that was just dropped. You can only disable Stretch for a database once all Stretch-enabled tables have been dropped. If you drop a database enabled for Stretch, the local database is removed but the remote data is not; you will have to drop the remote database from the Azure management portal.
---------------------------------------------------------------------------------
Some Queries used on StretchDB

To enable Stretch Database on an instance run:
EXEC sys.sp_configure N'remote data archive', '1';
RECONFIGURE;
GO
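The un-migrate procedure described above can be sketched in T-SQL like this (the table names are hypothetical):

```sql
-- 1. Copy all data (local and stretched) into a new local table.
SELECT * INTO [dbo].[StretchTest_Local] FROM [dbo].[StretchTest];

-- 2. Drop the Stretch-enabled table.
DROP TABLE [dbo].[StretchTest];

-- 3. Rename the local copy to the original table name.
EXEC sp_rename 'dbo.StretchTest_Local', 'StretchTest';
```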
-------------------
To monitor the migration of data to Azure you can query
sys.dm_db_rda_migration_status
---------------------

Check the status of data migration in the dynamic management view sys.dm_db_rda_migration_status to see how many batches and rows of data have been migrated.
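For example:

```sql
-- Each row represents one batch of rows migrated for a Stretch-enabled table.
SELECT * FROM sys.dm_db_rda_migration_status;
```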
---------------------------
Tools
  • Stretch DB Migration Advisor
  • Stretch Database Monitor




Monday, 30 January 2017

SQL Server In-Memory OLTP 2016



In-Memory OLTP is a new feature introduced in SQL Server 2014.

Why In-Memory?

This feature is used to improve OLTP database application performance. Memory-optimized-tables are fully transactional, durable, and are accessed using Transact-SQL in the same way as disk-based tables.

  • A query can reference both memory-optimized tables and disk-based tables.
  • A transaction can update data in memory-optimized tables and disk-based tables.
  • Stored procedures that only reference memory-optimized tables can be natively compiled into machine code for further performance improvements.


Benefits of this feature:

  • In-Memory OLTP improves performance best in OLTP workloads with short-running transactions.
  • Optimistic concurrency control that eliminates logical locks.
  • Lock-free objects that eliminate all physical locks and latches. Threads that perform transactional work don’t use locks or latches for concurrency control.
  • Natively compiled stored procedures, which have significantly better performance than interpreted stored procedures when accessing a memory-optimized table.

Deep dive into SQL Server in-memory OLTP

It mainly features two new data structures which are Memory-Optimized Tables, and Natively-Compiled Stored Procedures.

Memory-Optimized Tables

Memory-optimized tables are fully durable by default, and, like transactions on (traditional) disk-based tables, fully durable transactions on memory-optimized tables are fully atomic, consistent, isolated, and durable (ACID). Memory-optimized tables and natively compiled stored procedures support a subset of Transact-SQL.

Natively-compiled stored procedures
A natively-compiled stored procedure is a SQL Server object that can access only memory-optimized data structures such as memory-optimized tables, table variables, etc. 
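A minimal natively-compiled procedure looks like the following sketch (it assumes a memory-optimized table dbo.Employee like the one created later in this post; SCHEMABINDING and the ATOMIC block are required for native compilation):

```sql
CREATE PROCEDURE dbo.usp_GetEmployee
    @EmployeeID INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    SELECT EmployeeID, Name, EmployeeSince
    FROM dbo.Employee
    WHERE EmployeeID = @EmployeeID;
END;
```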

Here comes SQL Server 2016 In-Memory OLTP

SQL Server 2016 offers significant improvements to the In-Memory OLTP feature.

Some of the important feature enhancements:

In-Memory Performance

SQL Server 2016 in-memory technology provides improved query performance. For example, the same transactions that run against SQL Server 2014 memory-optimized tables will run 30 times faster when run against SQL Server 2016 memory-optimized tables.

Supported size for In-Memory Tables

In SQL Server 2014, memory-optimized tables are limited to a maximum of 256 GB of memory in total.
In SQL Server 2016, the recommended maximum is 2 TB of memory for memory-optimized tables.

Schema and data changes

SQL Server 2014 in-memory OLTP does not allow you to modify memory-optimized tables once they are created. If you have to make changes to a memory-optimized table schema or rebuild its indexes, you must drop and re-create the memory-optimized table, using DROP TABLE and CREATE TABLE statements. 
From SQL Server 2016 in-memory OLTP, this is no longer a constraint; you can make changes to memory-optimized tables after creation, using the ALTER TABLE statement.
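For example, in SQL Server 2016 you can add a column or resize a hash index in place (the table and index names match the Employee example created later in this post):

```sql
-- Add a column to a memory-optimized table (SQL Server 2016+).
ALTER TABLE dbo.Employee ADD Department NVARCHAR(100) NULL;

-- Rebuild a hash index with a larger bucket count.
ALTER TABLE dbo.Employee
    ALTER INDEX IName REBUILD WITH (BUCKET_COUNT = 2000000);
```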

Transparent Data Encryption (TDE)

In SQL Server 2014, the TDE feature is not compatible with memory-optimized tables. That means data stored for memory-optimized tables on a MEMORY_OPTIMIZED_DATA filegroup is not encrypted on disk.

TDE is fully compatible with SQL Server 2016 memory-optimized tables.
  
AlwaysOn

In SQL Server 2014, the visibility of data in memory-optimized tables on secondary replicas is delayed by a few transactions.
With SQL Server 2016 memory-optimized tables, as with disk-based tables, all data is visible immediately to the user.
Below are the complete feature enhancements from 2014 to 2016:

Feature/Limit                                          | SQL Server 2014                  | SQL Server 2016
Maximum combined size of durable tables                | 256 GB                           | 2 TB
LOB (varbinary(max), [n]varchar(max))                  | Not supported                    | Supported*
Transparent Data Encryption (TDE)                      | Not supported                    | Supported
Offline Checkpoint Threads                             | 1                                | 1 per container
ALTER PROCEDURE / sp_recompile                         | Not supported                    | Supported (fully online)
Nested native procedure calls                          | Not supported                    | Supported
Natively-compiled scalar UDFs                          | Not supported                    | Supported
ALTER TABLE                                            | Not supported (DROP / re-CREATE) | Partially supported (offline)
DML triggers                                           | Not supported                    | Partially supported (AFTER, natively compiled)
Indexes on NULLable columns                            | Not supported                    | Supported
Non-BIN2 collations in index key columns               | Not supported                    | Supported
Non-Latin codepages for [var]char columns              | Not supported                    | Supported
Non-BIN2 comparison / sorting in native modules        | Not supported                    | Supported
Foreign Keys                                           | Not supported                    | Supported
Check/Unique Constraints                               | Not supported                    | Supported
Parallelism                                            | Not supported                    | Supported
OUTER JOIN, OR, NOT, UNION [ALL], DISTINCT, EXISTS, IN | Not supported                    | Supported
Multiple Active Result Sets (MARS)                     | Not supported                    | Supported (means better Entity Framework support)
SSMS Table Designer                                    | Not supported                    | Supported

Creating Memory optimized table

CREATE TABLE [Employee](
[EmployeeID] INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
[Name] NVARCHAR(250) NOT NULL INDEX [IName] HASH WITH (BUCKET_COUNT = 1000000),
[EmployeeSince] DATETIME NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

Below are the keywords used in the above query.

MEMORY_OPTIMIZED = ON
This clause tells the database engine to treat this table as a memory-optimized table.

NONCLUSTERED HASH
Memory optimized tables can have up to eight non-clustered indexes, they don’t support clustered indexes. These indexes don't duplicate data, but rather just point to the rows in the chain. In-memory OLTP maintains these indexes online and they get created each time during database recovery.

Only two types of Indexes can be created on In-Memory tables, i.e. Non Clustered Hash Index and Non Clustered Index (aka Range Index).

Hash index is an array of N buckets or slots, each one containing a pointer to a row.
Non Clustered Indexes are inherently Covering, and all columns are automatically INCLUDED in the Index.

BUCKET_COUNT
With the index specification, you also need to specify the BUCKET_COUNT. The recommended value is two times the expected number of unique values in that column.
The value of BUCKET_COUNT represents the number of buckets a Hash index has. We can think of a bucket as an index position in which a row address is stored.
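You can check whether the chosen BUCKET_COUNT is adequate by inspecting the hash index statistics DMV; many empty buckets with short chains suggests the count is generous, while long chains suggest it is too small:

```sql
SELECT OBJECT_NAME(object_id) AS table_name,
       total_bucket_count,
       empty_bucket_count,
       avg_chain_length,
       max_chain_length
FROM sys.dm_db_xtp_hash_index_stats;
```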

DURABILITY = SCHEMA_AND_DATA
With memory-optimized tables you can specify either DURABILITY = SCHEMA_AND_DATA or DURABILITY = SCHEMA_ONLY.

DURABILITY = SCHEMA_AND_DATA is the default option. With this option, changes to table data are logged in the SQL Server transaction log and the data is stored in a FILESTREAM-based filegroup, so that data can be recovered after an unexpected crash or a server restart.

DURABILITY = SCHEMA_ONLY: with this option, changes to table data are not logged and only the structure (schema) of the table is saved. After a server restart the table will be available, but it will be empty.
---------------------------------------------------------------------------------------------
In-Memory OLTP in Azure

In-Memory OLTP can be used to improve the performance of OLTP workload in Premium Azure SQL Databases without increasing the performance level. With In-Memory OLTP in SQL Server, customers achieve up to 30x transactional performance gains.
A memory-optimized table has one representation of itself in active memory, in addition to the standard representation on a hard drive. Business transactions against the table run faster because they directly interact with only the representation that is in active memory.
Premium databases created in November 2015 or later support the In-Memory feature.
Check the table compatibility for In-Memory using Memory Optimization Advisor.

Bringing this technology to the cloud means customers will be able to take advantage of in-memory OLTP in a fully managed database-as-a-service with 99.99% SLA.