
In-Memory OLTP Engine (Hekaton) in SQL Server 2014 CTP2

Today I am going to discuss a new SQL Server 2014 feature: In-Memory OLTP, also known as Hekaton (a Greek word meaning hundred, or hundredfold). In-Memory OLTP significantly reduces the time required to process OLTP workloads. The best part of this feature is that it is installed with the SQL Server 2014 engine, without requiring any additional installation, and it delivers its benefits without rewriting your database application.
High Level Architecture
The architecture is designed to take maximum advantage of recent hardware changes in database servers. SQL Server was originally designed to keep data on disk and move it back and forth to main memory as needed for processing. That design was the best choice at the time, because main memory was very expensive. Today memory is much less expensive than in the past, and organizations can afford to add large amounts of RAM to their servers (up to 4 TB on a 64-bit processor). Below is the high-level architectural diagram, which includes Hekaton:

[Figure: High-level architecture of SQL Server 2014 with the In-Memory OLTP (Hekaton) engine]

Let's try to understand the diagram above, which explains the high-level architecture. The client application connects to the TDS handler in the same way whether it wants to access a memory-optimized table or a disk-based table. From the diagram it is clear that interpreted T-SQL can access memory-optimized tables through the interop capabilities, but natively compiled stored procedures can access only memory-optimized tables.
Using In-Memory OLTP you can create a table that is stored in main memory and is available to your OLTP workload for faster data access. This type of table is called a memory-optimized table. The most significant difference between memory-optimized tables and disk-based tables is that pages do not need to be read into cache from disk when memory-optimized tables are accessed; the data is always resident in memory. Only the metadata about memory-optimized tables is stored in the SQL Server catalogs.
Since pages are not involved for storage here, there are no page latches, which gives us locking- and blocking-free processing. Writes to memory-optimized tables use row versioning, but tempdb is not involved. The best execution performance is achieved by using natively compiled stored procedures with memory-optimized tables; these are stored procedures whose high-level T-SQL is translated into machine code.
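Here is a minimal T-SQL sketch of both pieces. It assumes the database already has a MEMORY_OPTIMIZED_DATA filegroup (an example of adding one appears under Myth #2 below); the table dbo.ShoppingCart and the procedure dbo.usp_InsertCart are hypothetical names used only for illustration:

-- A memory-optimized table with a hash index; the data lives entirely in memory.
CREATE TABLE dbo.ShoppingCart
(
    CartId      INT       NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    CustomerId  INT       NOT NULL,
    CreatedDate DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
GO

-- A natively compiled stored procedure: the T-SQL body is compiled to machine code.
CREATE PROCEDURE dbo.usp_InsertCart
    @CartId INT,
    @CustomerId INT
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC WITH (TRANSACTION ISOLATION LEVEL = SNAPSHOT, LANGUAGE = N'us_english')
    INSERT INTO dbo.ShoppingCart (CartId, CustomerId, CreatedDate)
    VALUES (@CartId, @CustomerId, SYSUTCDATETIME());
END;
GO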

Important Features
  • A single transaction can access and update data in both memory-optimized and disk-based tables (with a few restrictions); see the interop sketch after this list.
  • Memory-optimized tables work with an AlwaysOn availability group setup.
  • Natively compiled stored procedures improve execution time.
  • No latches, as data is not stored in pages.
  • With memory-optimized tables, non-clustered indexes are not persisted to disk, so they take up no disk space.
  • Hash indexes.
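As a sketch of the cross-container capability from the first bullet, the following single transaction updates both a memory-optimized table and an ordinary disk-based table through interpreted (interop) T-SQL; dbo.ShoppingCart and dbo.OrderHistory are hypothetical names:

BEGIN TRANSACTION;

    -- Memory-optimized table; the SNAPSHOT hint is needed inside an explicit
    -- transaction unless MEMORY_OPTIMIZED_ELEVATE_TO_SNAPSHOT is enabled.
    UPDATE dbo.ShoppingCart WITH (SNAPSHOT)
    SET CustomerId = 42
    WHERE CartId = 1;

    -- Regular disk-based table in the same transaction.
    UPDATE dbo.OrderHistory
    SET LastModified = SYSUTCDATETIME()
    WHERE CartId = 1;

COMMIT TRANSACTION;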
In-Memory – Myths and Realities

Myth #1: In-Memory OLTP is the same as, or an upgraded version of, DBCC PINTABLE.
Reality: In-Memory OLTP is a separate component of the database engine, which allows you to create memory-optimized tables. These memory-optimized tables have a completely different data and index structure. No locking is used, and logging changes to memory-optimized tables is much more efficient than logging changes to disk-based tables.

Myth #2: If SQL Server crashes, all data is lost.
Reality: In case of a crash, a durable memory-optimized table is rebuilt from the transaction log and checkpoint files. The checkpoint files keep track of changes to the data in the table and are stored in a filestream-based filegroup of the SQL Server database.
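A minimal sketch of creating that filegroup, assuming a hypothetical database named SalesDB and a local path of your choosing; the checkpoint files for durable memory-optimized tables are kept in this filestream-based container:

-- Add a memory-optimized (filestream-based) filegroup to the database.
ALTER DATABASE SalesDB
    ADD FILEGROUP SalesDB_mod CONTAINS MEMORY_OPTIMIZED_DATA;

-- Add a container (a directory, not a single file) to that filegroup;
-- the checkpoint files are written here.
ALTER DATABASE SalesDB
    ADD FILE (NAME = 'SalesDB_mod_dir', FILENAME = 'C:\Data\SalesDB_mod_dir')
    TO FILEGROUP SalesDB_mod;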
Conclusion

In-Memory OLTP could benefit you by converting large disk-based tables to memory-optimized tables for faster processing. It could benefit data warehouse solutions by speeding up the time it takes to pull data from an in-memory source system into the data warehouse. Memory-optimized tables can also be used as a staging area, as sketched below.
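For the staging scenario, one possible sketch is a non-durable memory-optimized table; dbo.StageOrders is a hypothetical name, and SCHEMA_ONLY means the data is neither logged nor checkpointed, so it is lost on restart, which is usually acceptable for staging:

-- Non-durable staging table: the schema survives a restart, the data does not.
CREATE TABLE dbo.StageOrders
(
    OrderId INT   NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 500000),
    Amount  MONEY NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_ONLY);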