I got the error below when I tried to use the R package "ggplot2".
It says there is no package called ggplot2, even though I had installed the ggplot2 package in R.
This is the message from R when I installed the ggplot2 package:
The message above clearly shows that the package was not installed in the defined library path; it was only downloaded to a temporary directory. In other words, no proper installation happened in R, and this was the reason for the error in SQL Server.
Follow the steps below to sort out the issue.
Use ".libPaths()" to set the library path before installing a package in R:
> .libPaths(.libPaths()[2:3])
> install.packages("ggplot2")
The package will now be installed as shown below:
Now run the same SQL script with R; this time it will execute without any issue.
Following today's Data Driven live stream from New York, Satya Nadella opened the speech with insights on what CEOs and organization owners should expect with data being the key factor driving business.
Also, SQL Server 2016 RC (Release Candidate) is now available here, and there will be no more CTPs on the Microsoft download site. Be among the first to download the latest Release Candidate and try the brand-new features of SQL Server.
Go through my previous article to enable Stretch for your database and table. This article is about how to monitor Stretch. You can monitor it through the GUI or through DMVs.
The following DMVs are available to monitor the status of Stretch.
select * from sys.remote_data_archive_databases
The script above shows which database in Azure is used for Stretch.
select * from sys.remote_data_archive_tables
This tells you which table in Azure is used for Stretch.
The result above has a column called "filter_predicate". If you applied a predicate function while configuring Stretch, you can see that predicate function here. Unfortunately, there is no option to apply a predicate function in the GUI method; it is only available when you enable Stretch through a T-SQL script.
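For reference, this is roughly what attaching a predicate via T-SQL looks like. A minimal sketch: the table name dbo.Orders, the OrderDate column, the cut-off date, and the function name are all illustrative assumptions, not values from this article.

```sql
-- Inline table-valued function used as the filter predicate
-- (table, column, and function names are assumptions for illustration)
CREATE FUNCTION dbo.fn_stretchpredicate (@OrderDate datetime)
RETURNS TABLE
WITH SCHEMABINDING
AS
RETURN SELECT 1 AS is_eligible
       WHERE @OrderDate < '20150101';   -- only rows older than this migrate
GO

ALTER TABLE dbo.Orders
    SET (REMOTE_DATA_ARCHIVE = ON (
        FILTER_PREDICATE = dbo.fn_stretchpredicate(OrderDate),
        MIGRATION_STATE = OUTBOUND));
```

The function must be schema-bound and deterministic; rows for which it returns a row are eligible for migration, while the rest stay in the local table.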
select * from sys.dm_db_rda_migration_status
This script gives details about the data migration. Data starts migrating from the table once Stretch is enabled, moving from the on-premises SQL Server to Azure. The migration uses a batch process, so it migrates a maximum of 4,999 records per batch. The batch ID is available in the Azure table.
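For example, to see per-batch progress, you can select a few of this DMV's documented columns (a sketch):

```sql
SELECT table_id,
       migrated_rows,     -- rows moved in this batch
       start_time_utc,
       end_time_utc,
       error_number       -- NULL when the batch succeeded
FROM sys.dm_db_rda_migration_status
ORDER BY end_time_utc DESC;
```

Summing migrated_rows per table gives a running total of how much data has already moved to Azure.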
To watch the migration live, we can use "sp_spaceused". By checking the row count and size, we can track the data migration.
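A minimal sketch, assuming a stretched table named dbo.Orders (the name is illustrative): run this repeatedly and watch the local row count shrink as each batch migrates.

```sql
-- Reports local rows and reserved/data/index sizes for the table
EXEC sp_spaceused 'dbo.Orders';
```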
You can also monitor Stretch using the GUI:
We can enable the Stretch feature either through the GUI or through a T-SQL script. This article covers the GUI method.
To enable Stretch for your database:
1. You should have an Azure subscription.
2. If you are using an existing SQL Server in Azure, then the firewall should be enabled.
Steps to enable Stretch:
Step 1: Choose the database for which you want to enable Stretch, then right-click –> Tasks –> Stretch –> Enable.
Step 2: Create the database credentials; you need to provide a master key. You will need this key whenever you disable or enable Stretch for this database.
Step 3: Select the table you want to stretch; you can select multiple tables. This screen shows information about each table and its record count, and also tells you whether Stretch is already enabled (Stretched = False).
Step 4: SQL Server settings will be validated here.
Step 5: Sign in using the credentials for the account where your Azure subscription is active. Once signed in, select an Azure SQL Server: you can create a new server or choose an existing one.
Step 6: Provide your Azure SQL Server credentials.
Step 7: Finally, SQL Server configures Stretch for your database and table. To verify, check your database icon.
If you get an error like the one below while configuring Stretch, check the log file. Most often the error occurs because you have not configured the firewall on the Azure SQL Server.
In a normal database, we cannot create in-memory tables. To create an in-memory table, we need to follow the rules below.
1. Create or alter the database to add a memory-optimized filegroup.
2. Add a file to the filegroup.
Here are the steps to alter your database to support in-memory tables. Before that, let us try to create an in-memory table in a normal database.
Step 1: Alter the database to add a memory-optimized filegroup.
ALTER DATABASE [SampleDB] ADD FILEGROUP Analytics_Mode CONTAINS MEMORY_OPTIMIZED_DATA
Step 2: Add a file to the filegroup. Provide a path under the directory where SQL Server is installed.
ALTER DATABASE [SampleDB] ADD FILE (name='Analytics_Mode', filename='E:\Local_Install_Applications\SQL Server\MSSQL13.MSSQLSERVER\MSSQL\DATA\InMemory1') TO FILEGROUP Analytics_Mode
In the script above, "InMemory1" is a directory; don't create this directory manually. The script creates it automatically and populates the files.
Step 3: Now run the in-memory table script.
CREATE TABLE Account (
ID int NOT NULL PRIMARY KEY NONCLUSTERED,
AccountDescription nvarchar(50)
) WITH (MEMORY_OPTIMIZED = ON);
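By default, the script above creates a fully durable table (DURABILITY = SCHEMA_AND_DATA). The durability can also be stated explicitly; a sketch, using an illustrative table name:

```sql
-- SCHEMA_ONLY keeps only the table definition across restarts;
-- the data itself is lost, which suits staging or session-state tables
CREATE TABLE AccountStaging (
    ID int NOT NULL PRIMARY KEY NONCLUSTERED,
    AccountDescription nvarchar(50)
) WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);
```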
We have a new feature called "Elastic Database Query" in Azure SQL Database. Using this option, we can perform cross-database queries.
It supports both vertical and horizontal partitioning. Cross-database queries use an external data source and an external table, much like PolyBase in SQL Server 2016.
Consider two different databases residing on two different SQL Servers in Azure.
SQL Server 1 has database called SQLDB1.
SQL Server 2 has database called SQLDB2.
The SQLDB1 database has a table called "Orders" with a few records, and SQLDB2 contains a table called "Customers".
To access these tables together, we need to create an external data source and an external table in one of the databases.
Step by Step Explanation:
Step 1: Create two SQL Servers in Azure, in the same region or in different regions.
Step 2: Create a database “SQLDB1” in Server1.
Step 3: Create a database “SQLDB2” in Server2.
Step 4: Create a table “Orders” in SQLDB1 (Server1)
CREATE TABLE [dbo].[Orders](
[OrderID] [int] NOT NULL,
[CustomerID] [int] NOT NULL
);
INSERT INTO [dbo].[Orders] ([OrderID], [CustomerID]) VALUES (123, 1)
INSERT INTO [dbo].[Orders] ([OrderID], [CustomerID]) VALUES (149, 2)
INSERT INTO [dbo].[Orders] ([OrderID], [CustomerID]) VALUES (857, 2)
INSERT INTO [dbo].[Orders] ([OrderID], [CustomerID]) VALUES (321, 1)
INSERT INTO [dbo].[Orders] ([OrderID], [CustomerID]) VALUES (564, 8)
Step 5: Create a table “Customers” in SQLDB2 (Server2)
CREATE TABLE [dbo].[Customers](
[CustomerID] [int] NOT NULL,
[CustomerName] [varchar](50) NULL,
[Company] [varchar](50) NULL,
CONSTRAINT [CustID] PRIMARY KEY CLUSTERED ([CustomerID] ASC)
);
INSERT INTO [dbo].[Customers] ([CustomerID], [CustomerName], [Company]) VALUES (1, 'Hari', 'ABC')
INSERT INTO [dbo].[Customers] ([CustomerID], [CustomerName], [Company]) VALUES (2, 'Raj', 'XYZ')
INSERT INTO [dbo].[Customers] ([CustomerID], [CustomerName], [Company]) VALUES (3, 'John', 'MNO')
Step 6: Go to Server 1, open a new query window, and create the database master key and scoped credential. The user name and password should be the Server 2 credentials.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<password>';
CREATE DATABASE SCOPED CREDENTIAL ElasticDBQueryCred
WITH IDENTITY = '<server2 username>',
SECRET = '<password>';
Step 7: Create an external data source in the same window (Server 1).
CREATE EXTERNAL DATA SOURCE MyElasticDBQueryDataSrc WITH
(TYPE = RDBMS,
LOCATION = 'server2.database.windows.net',
DATABASE_NAME = 'SQLDB2',
CREDENTIAL = ElasticDBQueryCred
);
Step 8: Along with the external data source, create an external table. The definition of the table should match the Customers table in SQLDB2 (Server 2), and the table name should also be the same.
CREATE EXTERNAL TABLE [dbo].[Customers]
( [CustomerID] [int] NOT NULL,
[CustomerName] [varchar](50) NULL,
[Company] [varchar](50) NULL)
WITH ( DATA_SOURCE = MyElasticDBQueryDataSrc);
Step 9: Access the tables from Server1.SQLDB1 using the script below.
SELECT Orders.CustomerID, Orders.OrderID, Customers.CustomerName, Customers.Company
FROM Orders
INNER JOIN Customers
ON Customers.CustomerID = Orders.CustomerID;
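Besides external tables, the same external data source can run ad-hoc T-SQL directly on the remote database through sp_execute_remote (a sketch, reusing the data source created in Step 7):

```sql
-- Executes the statement on SQLDB2 and streams the result back to SQLDB1
EXEC sp_execute_remote
    N'MyElasticDBQueryDataSrc',
    N'SELECT COUNT(*) FROM dbo.Customers';
```

This is handy for quick checks against the remote database without defining an external table for every object.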
As we know, the current version of Stretch DB migrates all rows from the local table to the Azure database table when no filter predicate is applied (the GUI has no option for a filter predicate; it is only possible via a T-SQL script). At some point, there are no records left in the local table.
In this situation, we do not get the expected performance when executing queries. I had a chance to look at the query execution plan and did some analysis.
I executed the script before Stretch DB was enabled, using a simple table without any index on it.
Then I enabled Stretch DB, included the table above, and got the query plan below. By this time, almost all the records had been migrated from local to Azure.
This was not the execution plan I saw immediately after enabling Stretch; at that time, the local table had more records than Azure.