In a normal database, we cannot create in-memory tables. To create an in-memory table, we need to follow the rules below.
1. Create or alter the database to add a memory-optimized filegroup
2. Add a file to that filegroup.
Below are the steps to alter your database to support in-memory tables. Before that, let us try to create an in-memory table in a normal database.
Step 1: Alter the database to add a memory-optimized filegroup.
ALTER DATABASE [SampleDB] ADD FILEGROUP Analytics_Mode CONTAINS MEMORY_OPTIMIZED_DATA
Step 2: Add a file to the filegroup. Try to provide a path under the directory where SQL Server is installed.
ALTER DATABASE [SampleDB] ADD FILE (NAME = 'Analytics_Mode', FILENAME = 'E:\Local_Install_Applications\SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\InMemory1') TO FILEGROUP Analytics_Mode
In the above script, "InMemory1" is a directory; do not create this directory manually. The script above will create it automatically and populate the files.
Step 3: Now run the in-memory table script.
CREATE TABLE Account (
ID int NOT NULL PRIMARY KEY NONCLUSTERED,
AccountDescription nvarchar(50)
) WITH (MEMORY_OPTIMIZED = ON);
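To confirm that the filegroup and the table really are memory-optimized, a quick check against the standard SQL Server 2016 catalog views (a sketch, assuming the Account table above):

```sql
-- Memory-optimized filegroups have type 'FX'
SELECT name, type, type_desc
FROM sys.filegroups
WHERE type = 'FX';

-- is_memory_optimized should be 1 for the new table
SELECT name, is_memory_optimized, durability_desc
FROM sys.tables
WHERE name = 'Account';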
We have a new feature called "Elastic Database Query" in Azure SQL Database. Using this option, we can perform cross-database queries. It supports both vertical and horizontal partitioning. Cross-database queries use an external data source and an external table, much like PolyBase in SQL Server 2016.
Consider you have two different databases, residing on two different SQL Servers in Azure.
SQL Server 1 has database called SQLDB1.
SQL Server 2 has database called SQLDB2.
Inside the SQLDB1 database, we have a table called "Orders" with a few records, and SQLDB2 contains a table called "Customers".
To access these tables together, we need to create an external data source and an external table in one of the databases.
Step by Step Explanation:
Step 1: Create two SQL Servers in Azure, in the same region or in different regions.
Step 2: Create a database “SQLDB1” in Server1.
Step 3: Create a database “SQLDB2” in Server2.
Step 4: Create a table “Orders” in SQLDB1 (Server1)
CREATE TABLE [dbo].[Orders](
[OrderID] [int] NOT NULL,
[CustomerID] [int] NOT NULL
)
INSERT INTO [dbo].[Orders] ([OrderID], [CustomerID]) VALUES (123, 1)
INSERT INTO [dbo].[Orders] ([OrderID], [CustomerID]) VALUES (149, 2)
INSERT INTO [dbo].[Orders] ([OrderID], [CustomerID]) VALUES (857, 2)
INSERT INTO [dbo].[Orders] ([OrderID], [CustomerID]) VALUES (321, 1)
INSERT INTO [dbo].[Orders] ([OrderID], [CustomerID]) VALUES (564, 8)
Step 5: Create a table “Customers” in SQLDB2 (Server2)
CREATE TABLE [dbo].[Customers](
[CustomerID] [int] NOT NULL,
[CustomerName] [varchar](50) NULL,
[Company] [varchar](50) NULL,
CONSTRAINT [CustID] PRIMARY KEY CLUSTERED ([CustomerID] ASC)
)
INSERT INTO [dbo].[Customers] ([CustomerID], [CustomerName], [Company]) VALUES (1, 'Hari', 'ABC')
INSERT INTO [dbo].[Customers] ([CustomerID], [CustomerName], [Company]) VALUES (2, 'Raj', 'XYZ')
INSERT INTO [dbo].[Customers] ([CustomerID], [CustomerName], [Company]) VALUES (3, 'John', 'MNO')
Step 6: Go to Server1, open a new query window, and create the database master key and a database scoped credential. The user name and password should be Server2 credentials.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>';
CREATE DATABASE SCOPED CREDENTIAL ElasticDBQueryCred
WITH IDENTITY = '<server2 username>',
SECRET = '<password>';
Step 7: Create an external data source in the same window (Server1).
CREATE EXTERNAL DATA SOURCE MyElasticDBQueryDataSrc WITH
(TYPE = RDBMS,
LOCATION = 'server2.database.windows.net',
DATABASE_NAME = 'SQLDB2',
CREDENTIAL = ElasticDBQueryCred);
Step 8: With the external data source in place, create an external table. Its definition should match the Customers table in SQLDB2 (Server2), and the table name should also be the same.
CREATE EXTERNAL TABLE [dbo].[Customers]
( [CustomerID] [int] NOT NULL,
[CustomerName] [varchar](50) NULL,
[Company] [varchar](50) NULL)
WITH ( DATA_SOURCE = MyElasticDBQueryDataSrc);
Step 9: Access the tables from Server1.SQLDB1 using the script below.
SELECT Orders.CustomerID, Orders.OrderID, Customers.CustomerName, Customers.Company
FROM Orders
INNER JOIN Customers
ON Customers.CustomerID = Orders.CustomerID
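If the cross-database query fails, it can help to confirm that the external objects were created correctly on Server1. A sketch using the standard catalog views:

```sql
-- External data sources defined in SQLDB1
SELECT name, location, database_name
FROM sys.external_data_sources;

-- External tables that reference them
SELECT name, data_source_id
FROM sys.external_tables;
```

Both views live in the database where you ran Steps 6-8, so run this connected to SQLDB1.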
As we know, the current version of Stretch Database migrates all rows from the local table to the Azure database table when no filter predicate is applied (the GUI has no option for a filter predicate; it is possible only via a T-SQL script). At some point, there are no records left in the local table.
In this situation, we cannot get good performance while executing the query. I had a chance to look at the query execution plan and did some analysis.
I executed the script before Stretch was enabled, using a simple table without any index on it.
Then I enabled Stretch, included the above table, and got the query plan below. At this time, almost all the records had been migrated from local to Azure.
This was not the execution plan I saw immediately after enabling Stretch; at that time, the local table had more records than Azure.
In the latest version, SQL Server 2016 CTP 3.3, the Stretch option has some issues. I have listed them below; hopefully they will be sorted out before GA (General Availability).
Issue 1: There is no GUI option to move only a subset of rows to the Azure database; the default option is "Migrate All Rows". Using an inline function, we can apply a condition to split the records, but this is available only through T-SQL scripts.
Issue 2: The Stretch option is not enabled for a table even though I enabled Stretch on the database with selected tables.
Issue 3: Once I enabled Stretch for the database, there is no option to disable it in the GUI. We have to disable it through a T-SQL script.
Issue 4: Almost every time, all the data got migrated to the Azure database, leaving 0 records locally.
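Issue 1 can be worked around in T-SQL. A sketch of a filter predicate (the table dbo.Orders and column OrderDate here are hypothetical examples; the function shape follows the Stretch filter-predicate requirements):

```sql
-- Inline table-valued function used as a Stretch filter predicate;
-- rows for which it returns a row are eligible for migration.
CREATE FUNCTION dbo.fn_stretchpredicate(@OrderDate datetime2)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS is_eligible
       WHERE @OrderDate < CONVERT(datetime2, '2016-01-01');
GO

-- Apply the predicate so only matching rows are migrated to Azure.
ALTER TABLE dbo.Orders
    SET (REMOTE_DATA_ARCHIVE = ON (
        FILTER_PREDICATE = dbo.fn_stretchpredicate(OrderDate),
        MIGRATION_STATE = OUTBOUND));
```

With this in place, rows that do not satisfy the predicate stay in the local table instead of everything moving to Azure.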
Follow my previous article to integrate R with SQL Server 2016. Once done, run an R script in SQL Server to make sure the integration completed successfully.
Run the below script in SQL Server
CREATE TABLE Sample ([Value] int not null) ON [PRIMARY]
INSERT INTO Sample Values (101);
INSERT INTO Sample Values (102);
INSERT INTO Sample Values (103);
EXEC sp_execute_external_script
@language = N'R'
, @script = N'OutputDataSet <- InputDataSet;'
, @input_data_1 = N'SELECT * FROM Sample;'
WITH RESULT SETS (([Value_R] int NOT NULL));
When I ran the script, I got the error below.
"Unable to communicate with the runtime for 'R' script. Please check the requirements of 'R' runtime. STDERR message(s) from external script: Fatal error: cannot create 'R_TempDir'"
To avoid this type of error, check the working directory in the Rlauncher.config file.
In my case, SQL Server is installed on the E drive: "E:\Local_Install_Applications\SQL Server\MSSQL13.MSSQLSERVER\MSSQL\Binn"
The blue highlighted path will be different if you installed SQL Server at a different location.
Open the Rlauncher.config file and make sure the working directory is correct, without any extra spaces. Also check the permissions on the "ExtensibilityData" directory, and provide a permission level similar to the one shown above.
Now try to run the same script in SSMS.
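Once the script runs cleanly, you can go beyond simply echoing the input. A sketch that aggregates inside R instead (assuming the Sample table created above; the column name Avg_Value is my own choice):

```sql
-- Compute the mean of Sample.Value inside the R runtime
-- and return it to SQL Server as a single-row result set.
EXEC sp_execute_external_script
@language = N'R'
, @script = N'OutputDataSet <- data.frame(avg_value = mean(InputDataSet$Value));'
, @input_data_1 = N'SELECT * FROM Sample;'
WITH RESULT SETS (([Avg_Value] float NOT NULL));
```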
It is very easy to integrate R services with SQL Server 2016.
First, make sure you have selected the “Advanced Analytics Extensions” while installing SQL Server.
Next, download and install the two R components below:
1. Revolution R Enterprise 7.5
2. Revolution R Open 3.2.2 for Revolution R Enterprise 7.5
Once the above components are installed, enable external scripts in SQL Server Management Studio:
exec sp_configure 'external scripts enabled', 1;
RECONFIGURE;
Note that a restart of the SQL Server service is required for this setting to take effect.
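To check whether the setting has actually taken effect, you can re-run sp_configure without arguments (a sketch; after a service restart, run_value should read 1):

```sql
-- Shows config_value (pending) and run_value (active) for the option
EXEC sp_configure 'external scripts enabled';
```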
The next step is to run the post-installation script from a command prompt. Locate the registerRext.exe file on your system.
If you chose the default installation path, the file will most likely reside in the following location: "C:\Program Files\RRO\RRO-3.2.2-for-RRE-7.5.0\R-3.2.2\library\RevoScaleR\rxLibs\x64"
Open a command prompt with admin privileges and go to the above directory.
Step 1: Type the below command in the command prompt:
cd C:\Program Files\RRO\RRO-3.2.2-for-RRE-7.5.0\R-3.2.2\library\RevoScaleR\rxLibs\x64
Step 2: registerRext.exe /install (use this if your SQL Server is the default instance; if it is a named instance, use the command in Step 3)
Step 3 (Optional): For a named instance, run:
registerRext.exe /install /instance:<SQLNamedInstance>
Creating a SQL Data Warehouse in Azure is very easy. It is the industry's first cloud data warehouse with full-fledged SQL capabilities and the ability to grow, shrink, or pause in seconds.
Follow the steps below to create a data warehouse:
1. Open portal.azure.com
2. Choose New –> Data + Storage –> SQL Data Warehouse
3. Define a name. You can create a new SQL Server or select an existing one. Choose the DWU (Data Warehouse Unit) count based on your requirements.
4. You can create the DW with a blank database or with a sample.
5. Click "Create"; Azure will create and deploy the DW for you.
I have talked about how to import a database into on-premises SQL Server using a .bacpac file taken from an Azure export (find it here).
This article will explain how to migrate a database from on-premises SQL Server to Azure. We need to perform two steps here:
1. Export the database and store it in Azure Storage Container
2. Import in Azure SQL Server
We export the database as a data-tier application since we need the .bacpac file format.
Steps to export and store in Azure Storage:
1. Choose the database you want to migrate. Right-click the database and choose Tasks –> Export Data-tier Application.
2. Provide the Azure storage account details and select the container.
3. Check the Azure storage for the .bacpac file.
Once the database is exported, the next step is to import it into Azure SQL Server:
1. Choose your SQL Server in Azure and click Import database.
2. Map your .bacpac file from your storage container.
Once the above steps are completed, your database will be deployed on Azure.
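If you prefer to monitor the import from T-SQL rather than the portal, a sketch (run while connected to the master database of the Azure SQL server; sys.dm_operation_status is specific to Azure SQL Database):

```sql
-- Track the progress of import/export and other management operations
SELECT major_resource_id AS database_name,
       operation,
       state_desc,
       percent_complete,
       start_time
FROM sys.dm_operation_status
ORDER BY start_time DESC;
```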