
SQL Server 2014 CTP 1 – Installation failure

Recently I was trying to install SQL Server 2014 CTP 1 on my Windows Server 2008 R2 Virtual Machine to explore SQL Server 2014.

What I encountered was a failure of the rule “Previous SQL product installation”:
————————————
Rule Check Result
————————————
Rule “Previous SQL product installation” failed.
A SQL product other than SQL Server 2014 CTP1 is detected. You cannot install this release until the existing instances of SQL products are uninstalled.
Cause
Microsoft SQL Server 2014 CTP1 does not support upgrade from, or side-by-side installation with, any previous version of SQL Server, including SQL Server 2012.
I found that SQL Express had been installed on the machine by a third-party application. So if you want to install SQL Server 2014 CTP 1, you need to either uninstall the previous SQL versions or, alternatively, install it on a fresh machine.

SQL Server 2014 CTP 2 – Installation

SQL Server 2014 CTP 2 has been announced and is now available for download as a community technology preview.

You can download SQL Server 2014 CTP 2 from http://technet.microsoft.com/en-US/evalcenter/dn205290.aspx. This post covers a step-by-step installation guide for SQL Server 2014 CTP 2 with screenshots. The final release may not be exactly the same, but it should be very similar. We will perform this installation on a virtual test machine running Windows Server 2008 R2 Enterprise Edition x64.

Upgrading from Microsoft SQL Server 2014 CTP 1 to Microsoft SQL Server 2014 CTP 2 is NOT supported, so make sure that you remove Microsoft SQL Server 2014 CTP 1 before proceeding with the installation.
Installation Steps
Step 1: Select the installation type. Here we will perform a stand-alone installation, so select that option.

Step 2: We will keep the default edition details. Click Next to continue.
Step 3: Read and accept the license terms and conditions. Click Next to continue.
Step 4: Setup checks Global Rules on this page. The Product Updates page was skipped.
Step 5: Setup should pass the Install Rules check. You may review any warnings and decide on a case-by-case basis whether to proceed. Click Next to continue.
Step 6: Once the check phase is completed, we will proceed to feature selection. I have kept the default, Feature Installation. Click Next to continue.

Step 7: Select the features that you would like to install.

Step 8: Based on the selected features, the SQL Server 2014 CTP 2 setup performs an additional Feature Rules check. Click Next to continue.
Step 9: Provide an instance name here if you want to install a named instance. I have selected the default instance. Click Next to continue.
Step 10: Specify the service account names and startup types on this page, then move to the adjacent tab.
Step 11: Default collation settings for the Database Engine and Analysis Services are shown here; you can customize them. Click Next to continue.
Step 12: On the Server Configuration tab of Database Engine Configuration, specify the authentication mode and add the users who will have administrative privileges on the Database Engine. I have added my own account as administrator here. Switch to the adjacent tab.
Step 13: Select the locations on the Data Directories tab. I have kept the defaults. Switch to the last tab.
Step 14: On the FILESTREAM tab you can enable the FILESTREAM feature. For now, I have kept it disabled. Click Next to continue.
Step 15: On the Analysis Services Configuration page, add the users who will have administrative privileges on Analysis Services. I have added my own account as administrator here. Switch to the adjacent tab.
Step 16: Select the locations on the Data Directories tab. I have kept the defaults. Click Next to continue.
Step 17: On the Reporting Services Configuration page, select an option for the Reporting Services installation. I have selected 'Install only', as I will configure it later. Click Next to continue.
Step 18: Some more checks on Feature Configuration Rules page. Click next to continue.
Step 19: We are ready to install. You may want to verify the selected configuration before proceeding. Click the Install button.
Step 20: Installation is in progress.
Step 21: Finally, setup completes successfully, with a pop-up window suggesting that you reboot the system after the successful installation of SQL Server 2014 CTP 2.
Now we have SQL Server 2014 CTP 2 installed. Let's connect with SSMS and verify the build.
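To verify the build from SSMS, a query along the following lines can be run (the exact version string returned will depend on your build):

```sql
-- Show the full version banner and key server properties.
SELECT @@VERSION;
SELECT SERVERPROPERTY('ProductVersion') AS ProductVersion,
       SERVERPROPERTY('ProductLevel')   AS ProductLevel,
       SERVERPROPERTY('Edition')        AS Edition;
```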
Conclusion
The SQL Server 2014 CTP 2 setup wizard is very similar to SQL Server 2012's. We will explore SQL Server 2014 features in upcoming blog posts.

SQL Server 2014 CTP 2 : Native Backup Encryption

Here we will discuss native backup encryption, a new feature introduced in SQL Server 2014 CTP2. Native backup encryption creates an encrypted backup file for a SQL Server database. This is functionality that third-party vendors have provided in the past, but encrypted backups are now integrated into the product.
How it works
A certificate or asymmetric key is used together with an encryption algorithm to achieve this. We have a choice of algorithms with this feature; the supported algorithms are AES 128, AES 192, AES 256, and Triple DES.

Native backup encryption in action

Step 1: We can accomplish this using either a certificate or an asymmetric key; we will use a certificate here.


Step 2: Create a certificate in the master database.
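A minimal T-SQL sketch of steps 1 and 2 might look like the following; it assumes no database master key exists in master yet, and the certificate name and password are illustrative placeholders:

```sql
USE master;
GO
-- A database master key is required before a certificate can be created.
-- The password below is a placeholder; use your own strong password.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<StrongPassword1!>';
GO
-- Create the certificate that will protect the encrypted backups.
CREATE CERTIFICATE BackupEncryptCert
    WITH SUBJECT = 'Certificate for native backup encryption';
GO
```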


Step 3: We are all set to create an encrypted backup file by specifying the certificate that we created in step 2 and a backup encryption algorithm. You can read more about the supported algorithms and choose the one that is best for you.
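An encrypted backup taken this way might look like the sketch below; the certificate name BackupEncryptCert, the database name, and the file path are illustrative only:

```sql
-- Take an encrypted (and compressed) backup using AES 256
-- and the certificate created earlier.
BACKUP DATABASE AdventureWorks2008
TO DISK = N'C:\Backup\AdventureWorks2008_Encrypted.bak'
WITH
    COMPRESSION,
    ENCRYPTION (
        ALGORITHM = AES_256,
        SERVER CERTIFICATE = BackupEncryptCert
    );
GO
```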


You will receive a warning, as shown above, if you have not backed up the certificate. It is very important that you back up the certificate to a safe location. So let's back up the certificate.
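Backing up the certificate and its private key can be sketched as follows; the certificate name, file paths, and private-key password are placeholders:

```sql
USE master;
GO
-- Back up the certificate and its private key to a safe location;
-- without them, the encrypted backup cannot be restored elsewhere.
BACKUP CERTIFICATE BackupEncryptCert
    TO FILE = N'C:\SafeLocation\BackupEncryptCert.cer'
    WITH PRIVATE KEY (
        FILE = N'C:\SafeLocation\BackupEncryptCert.pvk',
        ENCRYPTION BY PASSWORD = '<AnotherStrongPassword1!>'
    );
GO
```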

That is it: you now have your encrypted backup.
Native backup encryption using GUI
Step 1: We have two destination options for the backup: you can choose to back up to a URL or to disk. If you choose to back up to a URL, you supply the URL of your storage, credentials, and an Azure storage container. Below is the screenshot for the back-up-to-URL option. I will choose the disk option for now.


Step 2: Moving to the Backup Options page, we can see that the Encryption option is greyed out because I selected the DISK destination for the backup. This could change in the final release of SQL Server 2014; note that we were able to encrypt a backup created to disk using T-SQL in the previous section of this article.
Going back and selecting the URL option, we can now see the options for encrypting backups. We can choose one of four encryption algorithms as of now.

Benefits of native backup encryption
  • This will benefit organizations that use TDE just for the sake of encrypting the data in their backup files, which consumes a lot of resources on a production database. Also, TDE is available only in Enterprise Edition, whereas native backup encryption is available in all editions except SQL Server Express and SQL Server Web.
  • If you are currently using a third-party tool for encrypted backups, you can compare that tool against the functionality and performance of native encrypted backups and see whether they fulfill your requirements.

Limitations
  • Appending to an existing backup set is not supported for encrypted backups.
  • SQL Server Express and SQL Server Web do not support encryption during backup.

Conclusion
This is one of the most awaited features, and it is integrated into the product at no extra cost and with no additional installation.
I will compare the impact of different algorithms on backup performance and disk space utilization in an upcoming blog post.

SQL Server 2014 CTP2 – Memory Optimization Advisor

Here I am going to discuss one of the amazing new features of SQL Server 2014 CTP2, the Memory Optimization Advisor. I discussed the architectural details of SQL Server 2014 In-Memory OLTP in my last blog post here.

We will discuss Memory Optimization Advisor implementation with a demo in this blog post. This tool helps customers in migrating disk-based tables to memory optimized tables with ease.
Memory Optimization Advisor – In Action
I will use the AdventureWorks2008 database. For demo purposes I have created a copy of the Employee table named EmpTempTbl, with the OrganizationNode, SalariedFlag, and CurrentFlag columns removed.
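One way to build the demo table described above is a SELECT INTO like the following; this is an illustrative sketch, not the exact script used, and the column list is abbreviated:

```sql
USE AdventureWorks2008;
GO
-- Copy HumanResources.Employee into dbo.EmpTempTbl, leaving out
-- OrganizationNode, SalariedFlag, and CurrentFlag (and, for brevity
-- in this sketch, some other columns).
SELECT BusinessEntityID, NationalIDNumber, LoginID,
       JobTitle, BirthDate, MaritalStatus, Gender, HireDate
INTO dbo.EmpTempTbl
FROM HumanResources.Employee;
GO
```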
Let us explore the wizard step by step.
Step # 1: Open Management Studio, connect to the instance and the AdventureWorks2008 database, and right-click the table EmpTempTbl. Now choose Memory Optimization Advisor.
Step # 2: The advisor tool launches with a page describing the features of the tool. Click Next.
Step #3: On this screen the tool checks whether the selected table is fit for migration to a memory-optimized table. The tool reports immediately if anything about the table is preventing it from being migrated to a memory-optimized table.
We have the option of generating a report using the Generate Report button.
If you click the Generate button, the wizard provides an option to save the report (an HTML file) anywhere on the local disk. In my case, all is green, so click Next.
Step #4: This screen shows migration warnings about the limitations of memory optimized object, and a link which will explain the limitations in detail. Click Next.
Step #5: The wizard then takes us to the screen below, which lets us select the options for memory optimization.
  • Memory-Optimized Filegroup: you can have only one per database, and you must create it before migrating a disk-based table to a memory-optimized table; otherwise you will get an error.
  • Logical File Name: here you can change the logical file name.
  • Path: points to the location where the file will be saved.
  • Estimated Current Memory Cost (MB): the estimated current memory cost for this table.
There are two checkboxes on this page, described below:
  • An option to copy the data from the disk-based table to the new memory-optimized table during the migration process.
  • An option to change the durability of the table to schema-only (SCHEMA_ONLY), in which case the data is lost after each SQL Server service restart. By default, schema and data (SCHEMA_AND_DATA) is applied.
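The memory-optimized filegroup prerequisite mentioned above can be satisfied with T-SQL along these lines before running the advisor; the filegroup name, container name, and path are illustrative:

```sql
-- Add the (single) memory-optimized filegroup to the database,
-- then add a container to it. The container is a directory, not a file.
ALTER DATABASE AdventureWorks2008
    ADD FILEGROUP AW_MemOpt_FG CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE AdventureWorks2008
    ADD FILE (NAME = 'AW_MemOpt_Container',
              FILENAME = 'C:\Data\AW_MemOpt_Container')
    TO FILEGROUP AW_MemOpt_FG;
GO
```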
Step #6: The next screen of the wizard allows you to decide the name of the primary key, its member columns, and its type. You can choose between a nonclustered hash index and a nonclustered index here.
I have selected one integer column and one character column for the primary key.
I selected a char data type to show that we have no option other than a BIN2 collation for memory-optimized tables. Click Next.
Step #7: This screen of the wizard lists a summary of the migration actions. You have the option to script those operations by clicking the Script button; I scripted it out. Click Migrate.
Step #8: The migration process can take a while, depending on the number of objects. In this case, it succeeded. Click OK.
Now our table is in-memory. Let's check the properties of the table to verify it. In SSMS you can see the old table, which has been renamed EmpTempTbl_old, and the new table created under the Tables folder of the database.
Right-click the newly created table and go to Properties. You can see that the Memory Optimized option is set to True and the Durability is set to SchemaAndData.
This is a very user-friendly tool, with explanations and warnings that help users iron out issues well before implementing In-Memory technology. As this blog post was written with a CTP version of SQL Server 2014, things might change in the final release.

SQL Server 2014 In-Memory OLTP Engine (Hekaton) in SQL Server 2014 CTP2

Today I am going to discuss the new SQL Server 2014 feature In-Memory OLTP, a.k.a. Hekaton, from a Greek word meaning hundred or hundredfold. In-Memory OLTP significantly reduces the time required to process OLTP workloads. The best part of this feature is that it is installed with the SQL Server 2014 engine without requiring any additional installation. It can also deliver its benefits without requiring you to rewrite your database application.
High Level Architecture
The architecture is designed to take maximum advantage of recent hardware changes in database servers. SQL Server was originally designed to keep data on disk and move it back and forth to main memory as needed for processing. That design was the best at the time, because main memory was very expensive. Today memory is much less expensive, and organizations can afford to add large amounts of RAM to their servers (up to 4 TB on 64-bit processors). Below is the high-level architectural diagram, which includes Hekaton:

Let's try to understand the diagram above, which explains the high-level architecture. The client application connects to the TDS handler the same way whether it wants to access a memory-optimized table or a disk-based table. From the diagram, it is clear that interpreted T-SQL can access memory-optimized tables using the interop capabilities, but natively compiled stored procedures can only access memory-optimized tables.
Using In-Memory OLTP you can create a table that is stored in main memory and available to your OLTP workload for faster data access. This type of table is called a memory-optimized table. The most significant difference between memory-optimized tables and disk-based tables is that pages do not need to be read into cache from disk when a memory-optimized table is accessed: the data lives in memory, and only the metadata about memory-optimized tables is stored in the SQL Server catalogs.
Since pages are not involved for storage here, there are no page latches, which gives us processing free of locking and blocking. Writes to memory-optimized tables use row versioning, but tempdb is not involved. The best performance is achieved by using natively compiled stored procedures with memory-optimized tables; these take high-level T-SQL code and translate it into machine code.
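As a rough sketch of the concepts above, a memory-optimized table and a natively compiled stored procedure can be created like this; the table, index, and procedure names are illustrative, and the database must already have a memory-optimized filegroup:

```sql
-- A durable memory-optimized table with a nonclustered hash index
-- on the primary key.
CREATE TABLE dbo.OrderMem
(
    OrderID  INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1048576),
    Quantity INT NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- A natively compiled stored procedure; it can access only
-- memory-optimized tables.
CREATE PROCEDURE dbo.InsertOrderMem @OrderID INT, @Quantity INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT,
                   LANGUAGE = N'us_english')
    INSERT INTO dbo.OrderMem (OrderID, Quantity)
    VALUES (@OrderID, @Quantity);
END;
GO
```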

Important Features
  • A single transaction can access and update data in both memory-optimized and disk-based tables (with a few restrictions).
  • Memory-optimized tables work with an AlwaysOn setup.
  • Natively compiled stored procedures improve execution time.
  • No latches, as data is not stored on pages.
  • With memory-optimized tables, nonclustered indexes are not persisted to disk, so they take up no disk space.
  • Hash indexes.
In-Memory – Myths and Realities

Myth #1: In-Memory OLTP is the same/upgraded version of DBCC PINTABLE.
Reality: The In-Memory OLTP architecture is a separate component of the database engine which allows you to create memory-optimized tables. These memory-optimized tables have a completely different data and index structure. No locking is used, and logging changes to these memory-optimized tables is much more efficient than logging changes to disk-based tables.

Myth #2: If SQL Server crashes, all data is lost.
Reality: In the case of a crash, the table is recreated with the help of the transaction log and checkpoint files. The checkpoint files keep track of the changes to the data in the table and are stored in a SQL Server FILESTREAM filegroup.
Conclusion

In-Memory OLTP can benefit you by converting large disk-based tables to memory-optimized tables for faster processing. It can benefit data warehouse solutions by speeding up the time it takes to pull data from an in-memory source system into the data warehouse. Memory-optimized tables can be used as a staging area too.