Database Hosting Reference Architecture Guide_SQL_2014.docx


Transcript of Database Hosting Reference Architecture Guide_SQL_2014.docx

Page 1: Database Hosting Reference Architecture Guide_SQL_2014.docx

Database as a Service Reference Architecture Guide: SQL Server 2014-Based Database Services for Hosting Providers

Published: April 2014
Microsoft Corporation

Page 2: Database Hosting Reference Architecture Guide_SQL_2014.docx

Copyright information

This document is provided "as-is". Information and views expressed in this document, including URL and other Internet website references, may change without notice.

Some examples depicted herein are provided for illustration only and are fictitious. No real association or connection is intended or should be inferred.

This document does not provide you with any legal rights to any intellectual property in any Microsoft product. You may copy and use this document for your internal, reference purposes.

Microsoft, Hyper-V, SQL Server, Windows PowerShell, Windows, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries. All other trademarks are property of their respective owners.

© 2014 Microsoft Corporation. All rights reserved.


Page 3: Database Hosting Reference Architecture Guide_SQL_2014.docx

Contents

1 Overview.............................................................................................................................................5

2 Hosted Services...................................................................................................................................5

3 Physical Hardware Infrastructure........................................................................................................7

3.1 Compute Sizing and Scaling.........................................................................................................7

3.2 Physical Networking Best Practices.............................................................................................8

3.3 Storage Best Practices..................................................................................................................9

3.4 SQL Server Data and Log File Partitioning....................................................................................9

3.5 Storage Spaces...........................................................................................................................10

3.6 High Availability Using Storage Solutions...................................................................................11

4 Virtual Machine Virtual Hardware.....................................................................................................12

4.1 Hyper-V Architecture.................................................................................................................12

4.2 Sizing Virtual Machines..............................................................................................................12

4.3 Virtualization Compute Overhead.............................................................................................13

4.4 Windows Server 2012 Core for Host and Guest.........................................................................13

4.5 Virtual Networking Best Practices..............................................................................................14

4.6 Virtual Storage Best Practices....................................................................................................14

4.7 Disaster Recovery Using Virtual Storage Solutions....................................................................15

4.8 Installing SQL Server..................................................................................................................15

4.9 Hyper-V Virtual Machine Manager and Sysprep........................................................................15

5 Virtual Machine Management...........................................................................................................16

5.1 System Center Virtual Machine Manager..................................................................................16

5.2 Virtual Machine Configuration...................................................................................................16

5.3 Live Migration and Storage Migration.......................................................................................16

5.4 SQL Server Service.....................................................................................................................17

5.5 Database File Growth................................................................................................................17

5.6 Dynamic Memory......................................................................................................................17

6 SQL Server 2014 Configuration and Management.............................................................................19

6.1 Installation Prerequisites...........................................................................................................19

6.2 PowerShell.................................................................................................................................19

6.3 Installing SQL Server..................................................................................................................20


Page 4: Database Hosting Reference Architecture Guide_SQL_2014.docx

6.4 SQL Server User Rights...............................................................................................................20

6.5 Creating Tenant Databases........................................................................................................20

6.6 Contained Databases and Users................................................................................................22

6.7 Memory-Optimized Tables........................................................................................................24

6.8 Resource Governor....................................................................................................................25

6.9 Self-Service Administration........................................................................................................26

6.10 High Availability and Disaster Recovery Using SQL Server Solutions.........................................27

6.11 Policy-Based Management........................................................................................................28

6.12 Backups......................................................................................................................................28

6.13 Monitoring.................................................................................................................................28

6.14 System Center Operations Manager..........................................................................................29

7 Tenant Database Management.........................................................................................................29

7.1 Importing Data...........................................................................................................................29

7.2 SQL Server Migration Assistant..................................................................................................29

7.3 MAP Toolkit...............................................................................................................................29

7.4 SQL Server Management Studio................................................................................................30

7.5 Performance Tuning..................................................................................................................30

8 Summary...........................................................................................................................................30


Page 5: Database Hosting Reference Architecture Guide_SQL_2014.docx

1 Overview

This guide to building the infrastructure for hosting Microsoft® SQL Server® Database as a Service (DBaaS) is not limited to a particular type of hardware. By using the features of SQL Server 2014 and Hyper-V® virtual machines with Microsoft System Center 2012, a hosting service provider can start with very small tenant databases and scale out or scale up to meet the needs of the largest and busiest SQL Server applications. This reference architecture includes hardware, software, system design, and component configuration.

2 Hosted Services

Figure 1: Database as a Service is a multitenant instance of SQL Server with isolation at the database level.

Database as a Service, for the purposes of this reference architecture, is a multitenant offering with isolation at the SQL Server database level. Many tenants can share an instance of SQL Server 2014 Enterprise Edition, each tenant with its own database. SQL Server instances are hosted on Hyper-V virtual machines running Windows Server® 2012 R2, using either the full or the Server Core installation. Hyper-V virtual machines are managed and monitored by System Center 2012 R2 Virtual Machine Manager and Operations Manager.

A hosted service provider (HSP), the intended audience for this guide, may offer a single server or many hundreds or thousands of servers. Servers may be in the same data center or distributed across data centers for load balancing and disaster recovery.

As an HSP, you make individual SQL Server databases available to the tenant with an agreed-upon maximum size and an agreed-upon amount of resources available. You are responsible for maintaining the SQL Server instance, the virtual host, and the physical host compute, network, and storage infrastructure.


Page 6: Database Hosting Reference Architecture Guide_SQL_2014.docx

You can secure a single instance of SQL Server easily. When you use the Partially Contained Database and Contained Users features of SQL Server 2014, a database can be made into a secure environment. Users cannot access other databases or the metadata about tenant databases.

You can also use the SQL Server 2014 Enterprise Resource Governor to prevent a single user or a single tenant from using too much of the available resources, and resource usage can be balanced among tenants.

In a SQL Server 2014 DBaaS environment, each tenant is responsible for the data in its own SQL Server database. Tenants create the database architecture consisting of objects that store data and application code to maintain and search data and return results to clients. Within the database, a tenant database administrator (DBA) can set permissions enabling subsets of tenant users and groups to carry out these tasks.

The architecture of the database and how well the tenant optimizes code that performs searches will have a direct impact on performance and resource usage. You may wish to offer services to help tenants to optimize design and code to improve response times and minimize resource usage.

As the HSP, you are responsible for Windows and SQL Server maintenance and your standard agreement to provide hosted SQL Server services should define your maintenance windows. You may patch SQL Server and Windows on the virtual and physical hosts, migrate the database to a new virtual machine host when resource requirements require it, and migrate the virtual machine to a new physical host during routine maintenance.

You may also perform certain database-level services on behalf of the tenant, such as running scheduled jobs and backups. Scheduled jobs can run common maintenance tasks like Update Statistics or large scale data modification and aggregation, such as month-end processing. You may also make tools, such as an API or a self-service control panel, available to the tenant to manage such jobs while restricting the tenant's access to only the data it needs.

You can automate provisioning services by using management APIs and offer features like self-service provisioning to your customers. Such services reduce operational expenses for both you and your tenants.

You can monitor usage at a very detailed level with tools like Microsoft System Center Operations Manager and the SQL Server Management Pack. Tenants can be billed according to very broad guidelines or for only what they use.

Customers will see lower capital expenditures and total cost of ownership when they consolidate database servers, reduce the proliferation of on-premises applications, and share the cost of administrative expertise with other tenants. Tenants can take advantage of advanced solutions without having to buy and administer an entire enterprise solution.

By leasing SQL Server DBaaS, the tenant can pay for only those resources required for an application with the option to scale up in the future. There is no need to over-provision for what might or might not happen in the future, because when more capacity is needed, the hardware will be available.


Page 7: Database Hosting Reference Architecture Guide_SQL_2014.docx

3 Physical Hardware Infrastructure

Figure 2: The physical computing resources, Hyper-V and System Center are infrastructure for cloud computing and Database as a Service.

The physical infrastructure includes compute resources, such as cores and RAM, storage resources, networking, Windows Server 2012 R2 as the host operating system, Hyper-V Server, and System Center 2012 Virtual Machine Manager.

The physical hardware in this architecture is used primarily for hosting virtual machines. We recommend hosting even a single instance of SQL Server used by a single tenant on a virtual machine in order to take advantage of Hyper-V features, such as:

Live Migration
Virtual machine replication using Hyper-V Replica
Dynamic Memory
Provisioning using templates

3.1 Compute Sizing and Scaling

The size of the database, number of concurrent busy connections, and the type of application will dictate hardware requirements for the best performance. Business intelligence, transaction processing, and data warehouses all have different requirements for optimal performance. You should help customers understand peak workloads because requirements will vary from one to another. If your customer plans to migrate an on-premises application to the cloud, you should evaluate the existing environment so you can ensure acceptable performance.

Most tenant databases have a small footprint, with small file sizes (usually less than 1 GB) and small memory and CPU requirements. If a tenant database will usually be static except during occasional processing or reporting periods, you can use less expensive server hardware to scale out.

Larger databases and/or databases with busier, heavier workloads may need virtual machines with upwards of eight cores and 64 GB of memory in order to achieve required performance. Enterprise SQL Server applications can scale up to large physical hardware resources. Windows Server 2012 R2 Hyper-V has greatly increased capacity over previous versions of Windows®.

Capabilities of Windows Server 2012 Hyper-V hosts:
Logical processors per host: 320
Physical memory per host: 4 TB
Virtual CPUs per host: 2,048


Page 8: Database Hosting Reference Architecture Guide_SQL_2014.docx

A very large appliance can have 2 TB of RAM, 192 cores, over 50 TB of raw disk capacity, before partitioning, and a very high IO capacity. Such an appliance may run between 200 and 1,000 virtual machines, each virtual machine with its own instance of SQL Server. The limit to how many virtual machines the appliance can host is a function of how large the virtual machines are and how much resource they demand because of their workload―and how much oversubscription of memory and CPU resources there will be.

Physical servers hosting virtual machines achieve the best performance with 64-bit processors and Second Level Address Translation. This allows hardware to map virtual memory to physical memory rather than using the hypervisor. Also helpful for virtualization workloads is a large L2 cache.

To determine at what point scaling should take place, consider how robust the compute layer of your servers needs to be and what the desired ratio of virtual cores to physical cores should be. While oversubscription is necessary, the closer the ratio of virtual cores to physical cores is to 1:1, the lower the chance that multiple databases and VMs will demand CPU resources at the same time. When VMs do demand resources at the same time, the result is more context switching and CPU usage at or near 100 percent.

3.2 Physical Networking Best Practices

Use NIC Teaming to combine physical network adapters into a single virtual network adapter for greater bandwidth and failover in case one NIC fails. Windows Server supports up to 32 NICs in a team.
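
As a minimal sketch, a team of two physical adapters could be created with the built-in NIC Teaming cmdlets; the team and adapter names here are hypothetical:

# create a switch-independent team from two physical adapters
# (the Dynamic load balancing algorithm is available in Windows Server 2012 R2)
New-NetLbfoTeam -Name "HostTeam1" -TeamMembers "Ethernet 1","Ethernet 2" `
    -TeamingMode SwitchIndependent -LoadBalancingAlgorithm Dynamic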

Do not use NIC teaming for network attached storage, such as iSCSI or Fibre Channel over Ethernet. Storage networking should take advantage of multi-path IO.

In a host cluster, put the cluster heartbeat network on a separate subnet from the Hyper-V host management.

Make networks private. Use virtual LAN (VLAN) segments, and use encryption between private network entities wherever the underlying network segments are otherwise public.

NICs that support Single Root I/O Virtualization (SR-IOV) virtualize the NIC at the hardware level and expose the virtual NIC to the VM without operating system overhead. This can reduce the CPU overhead required for network traffic.

If client-server or server-server communications require encrypted connections, the VM can offload encryption and decryption to the NIC by using IPSec Task Offload. This reduces CPU overhead required for network traffic.

For other best practices, see the Physical NIC section of the Windows Server 2012 Hyper-V best practices.

3.3 Storage Best Practices

IO throughput is often the greatest initial obstacle to achieving the best performance. It is critical to have storage hardware that performs at optimum levels. You should consult with your customers about their


Page 9: Database Hosting Reference Architecture Guide_SQL_2014.docx

storage performance requirements as you finalize each service level agreement (SLA). Be sure to consider what your tenant needs in terms of IO operations per second (IOPS).

Configure your physical storage for the best combination of performance and reliability for SQL Server database files (virtual machine pass through) or Hyper-V VHDX files that are, in turn, hosting SQL Server data and log files. Storage optimized for random access will return close to the same performance for either.

Use multi-path IO (MPIO) where possible for redundancy between hosts and storage devices. MPIO architecture supports iSCSI, Fibre Channel, and serial attached SCSI (SAS) SAN connectivity by establishing multiple sessions or connections to the storage array.
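
A minimal sketch of enabling MPIO on a host and letting the Microsoft DSM claim iSCSI-attached devices, assuming the Multipath I/O feature is available on the host:

# install the Multipath I/O feature
Install-WindowsFeature -Name Multipath-IO

# let the Microsoft DSM automatically claim iSCSI-attached devices
# (a restart may be required before new paths are claimed)
Enable-MSDSMAutomaticClaim -BusType iSCSI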

When using iSCSI for storage, isolate the iSCSI network from the host and VM networks. Use a dedicated range of IP addresses for storage devices.

System Center 2012 Virtual Machine Manager (VMM) can discover local and remote storage infrastructure. VMM can be used to communicate directly with storage and can create new LUNs and provision them to a Hyper-V host or cluster.

If storage devices support Microsoft Offloaded Data Transfer (ODX), copying large amounts of data, typically the slowest part of a database migration or a backup and restore, can speed up substantially. The copy is offloaded to the hardware. Offloaded Data Transfer is embedded in the Windows copy engine.

3.4 SQL Server Data and Log File Partitioning

The best practice for SQL Server data and log files traditionally has been to isolate them on separate physical storage. This is because if the partition where data files are stored is lost, but the log partition is still available, the transaction log may be backed up and added as the last backup in the set. This way, the last committed transaction can be restored.

With advances in storage technology, many administrators are confident enough of their storage redundancy and reliability that they are partitioning a single RAID 1+0 array for use by all database files, both data and log. Some SQL Server appliances use this approach and present only a single drive C: to the virtual machine.

For a physical host with fewer physical disks available, and where a mixture of RAID partitioning schemes might be used, storage can be tiered according to whether it is intended for the host operating system, guest operating systems, guest SQL Server data files, guest SQL Server log files, guest SQL Server Tempdb files, or backup files. However, with reliability and redundancy increasing, many storage administrators are opting for large disk arrays with everything in one place.

RAID 1+0 can be used for both data and log files. One configuration is to have many more spindles per LUN for data, for example 4x4 sets, resulting in a better combination of read and write performance and greater reliability. An array intended for the best write performance for log files might contain one or more 2x2 sets. For smaller arrays, RAID 1 might be used for the operating system partitions and log files while RAID 1+0 is used for data.


Page 10: Database Hosting Reference Architecture Guide_SQL_2014.docx

Storage should, in the vast majority of cases, be optimized for random IO. A SQL Server database might have better performance from storage optimized for sequential IO when only a single database file occupies a set of spindles. When multiple databases are sharing storage, sequential IO turns into random IO. Optimizing storage for random IO can result in better performance overall for partitions sharing VHDX files or database files.

SQL Server supports the use of solid-state drive (SSD) storage. SSD storage may improve performance for write-intensive applications that make use of temporary tables in the Tempdb database.

SQL Server 2014 Enterprise Edition introduces the Buffer Pool Extension feature that allows the integration of SSD storage as a non-volatile RAM extension to the database engine buffer pool. A buffer pool extension is defined as a file on the SSD device. For this reason, it is optimal to have a non-fragmented drive. We recommend making the buffer pool extension file no more than 16 times the Max Server Memory configuration setting.

Test your storage throughput before deploying virtual machines. The SQLIO utility can serve as a barometer of the storage performance you can expect for SQL Server workloads. SQLIO test runs can confirm that the storage configuration meets IOPS requirements.
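
For example, a short random-read test against a test file on the data volume might look like the following; the flags and file path are illustrative only and should be adjusted to approximate the expected SQL Server workload:

sqlio -kR -frandom -t4 -o8 -b8 -s300 -LS E:\SqlData\testfile.dat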

The HammerDB open source project is another tool for testing performance. This client uses either the TPC-C (OLTP workloads) or the TPC-H (read-intensive workloads) database schema and simulates workloads from many users to discover where performance vulnerabilities exist when a SQL Server instance operates at peak resource consumption.

3.5 Storage Spaces

Figure 3: Storage spaces combine heterogeneous storage devices into a common pool.

Storage Spaces, a feature available with Windows Server 2012, organizes heterogeneous physical disks into storage pools. Disks can be different sizes and connected through different storage interconnects. You can expand storage pools by adding more disks. Storage Spaces provides high availability and iSCSI target support without the need for dedicated SAN hardware.

You can configure a storage pool into one or more spaces. Spaces are virtual disks that are available to the operating system and can be partitioned like physical disks.

Storage Spaces have virtual capacities assigned to them. You may have only 5 TB of physical space but a logical capacity of 10 TB. As files are added and grow, and the physical storage reaches capacity, Storage Spaces raises alerts to add more space.
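
A sketch of creating a thinly provisioned, mirrored space from the poolable disks on a host; the pool and disk names are hypothetical, and the storage subsystem is simply taken as the first one returned:

# group all poolable physical disks into a new storage pool
$disks = Get-PhysicalDisk -CanPool $true
$subSystem = Get-StorageSubSystem | Select-Object -First 1
New-StoragePool -FriendlyName "SqlPool" `
    -StorageSubSystemFriendlyName $subSystem.FriendlyName -PhysicalDisks $disks

# create a 10 TB thinly provisioned, mirrored virtual disk (space)
New-VirtualDisk -StoragePoolFriendlyName "SqlPool" -FriendlyName "SqlSpace01" `
    -Size 10TB -ProvisioningType Thin -ResiliencySettingName Mirror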


Page 11: Database Hosting Reference Architecture Guide_SQL_2014.docx

Storage Spaces has built-in resiliency to physical disk failure. You can associate a mirrored resiliency attribute with a space so that all data has a mirrored copy.

In Windows Server 2012 R2, you can create virtual disks made up of Storage Tiers, that is, two tiers of storage. You can use an SSD tier for more frequently accessed data and a hard drive tier for less frequently accessed data. You can assign SQL Server tables to file groups by level of activity in order to store tables on specific tiers.

You should configure storage that hosts SQL Server files to write through to disk with no write-behind caching, especially for transaction log files. Storage Spaces can use SSD in a storage pool for a write-back cache that is tolerant of power failures. Test this feature to see if it improves the performance of random writes to data files.

3.6 High Availability Using Storage Solutions

Physical hosts may be part of a Windows Server Failover Cluster where the virtual machines are a clustered resource hosted on shared storage. Windows Server 2012 R2 supports clusters of up to 64 nodes with up to 8,000 virtual machine instances in a single cluster.

You can install SQL Server as the clustered resource with shared storage for database files on a Windows Server Failover Cluster. In this scenario, the SQL Server service on the active node owns the files on shared storage.

SQL Server 2014 Standard Edition supports two node clusters. SQL Server 2014 Enterprise edition supports clusters with more than one secondary node.

Cluster Shared Volumes (CSV) is a feature of a Windows Failover Cluster where all nodes have simultaneous read-write access to the same LUNs. To avoid the 26 drive letter limitation, each node can access the CSV under the directory %SystemDrive%\ClusterStorage. You can use CSV along with synchronous replication of storage to provide high availability and load balancing. With SQL Server 2014, you can store database files for failover cluster instances on CSVs.

You can integrate Storage Spaces with CSVs. For more information, see How to Configure a Clustered Storage Space in Windows Server 2012.


Page 12: Database Hosting Reference Architecture Guide_SQL_2014.docx

4 Virtual Machine Virtual Hardware

Figure 4: Database as a Service uses a single SQL Server 2014 Enterprise instance per virtual machine.

It is important to understand how Hyper-V manages virtual machines on a physical host and how you can, in turn, configure the VMs to use resources for resource-intensive offerings like DBaaS. There are specific settings for the Hyper-V hypervisor and virtual machines to optimize performance when hosting SQL Server workloads. There are also best practices that help with manageability. The goal is to set up physical and virtual hardware to easily provision Hyper-V virtual machines that are built to host SQL Server instances of any size and resource requirement.

4.1 Hyper-V Architecture

Virtualization adds layers between the hardware and the application. This means additional overhead in managing the virtual resources.

The partition is the level of isolation in Hyper-V, and guest operating systems run inside partitions. The hypervisor has a parent, or root, partition running Windows Server 2012 R2. The root partition creates child partitions in which the Windows Server 2012 R2 guest virtual machines run. Child partitions and the guest operating systems have a virtual view of hardware, which means, for example, they do not handle processor interrupts.

4.2 Sizing Virtual Machines

To prevent performance bottlenecks due to oversubscription, it is important to size virtual machines to be in alignment with their resource requirements. Monitor resource consumption to track how much resource a given VM tends to use. This data will help you determine how many VMs of a given workload can co-exist on a physical host of a given size.

SQL Server can experience performance bottlenecks with disk writes for transaction log commits or disk reads during searches with excessive page scanning. Another common performance problem related to searches happens when many large result sets are sent to clients at the same time. This creates a network bottleneck. Issues with lock management, cache management, large numbers of client sessions, and others related to management of oversubscribed resources can also result in CPU bottlenecks. If not enough CPU resources are available, context switching also contributes to high CPU workloads.


Page 13: Database Hosting Reference Architecture Guide_SQL_2014.docx

After initially allocating a fixed amount of memory to be shared among all the databases on a SQL Server through a virtual machine template, you can use SQL Server dynamic management views and System Center Operations Manager to watch for trends in what the memory requirements are over time for a given virtual machine or for a specific database. The memory available on a physical server can be reallocated based on these trends. When memory is reallocated and a virtual machine has more memory available, SQL Server can automatically make use of it.

If you use Hyper-V Dynamic Memory to ensure that VMs are not over or under provisioned, you could see improvements in how shared memory is used.

In addition to virtual RAM and core requirements, make sure each provisioned VM gets the storage resources it requires. Small, medium, or large storage IOPS throughput may be needed. When a SQL Server application runs on a storage device that does not provide the needed IOPS, performance suffers. It is important to understand what the tenant expects and what the application requires to perform at that level.

Some sizing considerations for virtual machines are:

A virtual machine should have a minimum of 2 GB RAM and 2 cores available.

A medium size virtual machine would include 16 GB RAM and 8 cores.

A large virtual machine template would include 32 GB RAM and 16 cores.

The following are the maximum capabilities of Hyper-V virtual machines in Windows Server 2012 R2.

Virtual CPUs per virtual machine: 64
Memory per virtual machine: 1 TB
Active virtual machines per host: 1,024
Guest NUMA: Yes
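
As a sketch, a medium virtual machine matching the sizing above could be provisioned with the Hyper-V PowerShell module; the VM name, VHDX path, and switch name are hypothetical:

# create a Generation 2 VM with 16 GB of startup memory and a new system disk
New-VM -Name "SQLVM-Med01" -Generation 2 -MemoryStartupBytes 16GB `
    -NewVHDPath "E:\VHDs\SQLVM-Med01.vhdx" -NewVHDSizeBytes 127GB `
    -SwitchName "TenantSwitch"

# assign 8 virtual processors
Set-VMProcessor -VMName "SQLVM-Med01" -Count 8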

4.3 Virtualization Compute Overhead

To get the best performance from Hyper-V, the root partition should run only the Hyper-V role. No other applications should be allowed in the Windows instance on the physical host.

The layers added by the work of the hypervisor root partition and child partitions add approximately 10 percent to 12 percent CPU overhead. Memory overhead will typically be 512 MB if using the Windows GUI on the parent host, about 80 MB if using Windows Core. There is an additional small amount of memory overhead per virtual machine. Using the Hyper-V management console from the host physical server also adds memory overhead, as much as 1.5 GB.

4.4 Windows Server 2012 Core for Host and Guest

Server Core installation is the default installation option for Windows Server 2012 R2.


Page 14: Database Hosting Reference Architecture Guide_SQL_2014.docx

Hyper-V virtual machines obtain better performance and run with a smaller footprint on Windows Server Core. Windows Server Core does not install features related to the graphical user interface in Windows Server 2012 R2, such as Explorer, Microsoft Internet Explorer®, and the desktop start screen. A virtual machine will occupy about 4 GB less storage space per virtual machine without these features. This savings can result in more virtual machines per host.

With fewer installed features, there are many fewer patches to install, which means greater uptime.

Fewer installed features also mean better security, because there is a smaller attack surface.

4.5 Virtual Networking Best Practices

Windows Server 2012 R2 allows a virtual machine to have virtual network adapters connected to more than one Hyper-V switch and to still have connectivity even if the network interface card under one switch becomes disconnected.

We recommend that you do not share the virtual machine network adapter with the host operating system. The physical NIC used by VMs should not have an IP address assigned to it.

Hyper-V Manager can create a separate virtual switch for each NIC or NIC team. This allows tenants to be isolated on private networks. Throughput requirements can be closely monitored.

If using Hyper-V Replica or Live Migration, reserve a NIC or NIC team for dedicated use.

4.6 Virtual Storage Best Practices

Hyper-V VHDX files are virtual disk files with a newer format and a 64 KB block size. VHDX files can be up to 64 TB in size and are faster than older VHD files. With speed comparable to virtual machines that use pass-through access to write directly to storage, VHDX files also enable the use of Hyper-V Replica and Live Migration.

You can achieve better performance with a fixed size VHDX file. For performance gains, allocate the space required at the time the file is created. Dynamic allocations have the potential to slow down all the virtual machines sharing the same storage while that write-intensive activity is taking place.
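
For example, a fixed-size VHDX intended to hold SQL Server data files could be created ahead of time; the path and size are illustrative:

# allocate the full size at creation time to avoid dynamic expansion later
New-VHD -Path "E:\VHDs\SqlData01.vhdx" -SizeBytes 200GB -Fixed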

Another way to get better performance is to limit or prevent file growth events for SQL Server database files. To avoid multiple database users or tenants placing file-growth demands on shared storage at the same time, provision all the allocated storage to databases when they are created. This prevents delays during file growth. If the VHDX file has to grow to accommodate a SQL Server database file that is also growing, both of these resource-intensive events can cause substantial performance bottlenecks.

Hyper-V Virtual Fibre Channel allows virtual machines in Windows Server 2012 R2 to make use of fibre channel ports within the guest operating system. VMs can connect directly to fibre channel by having their own virtual fibre channel host bus adapters.


Page 15: Database Hosting Reference Architecture Guide_SQL_2014.docx

4.7 Disaster Recovery Using Virtual Storage Solutions

Hyper-V Replica allows virtual machines to be replicated asynchronously from a primary physical host to a secondary host at a disaster recovery site. This solution does not require any shared storage.
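
A minimal sketch of enabling replication for a SQL Server virtual machine; the host and VM names are hypothetical, and the replica server must already be configured to accept replication:

# replicate the VM to the disaster recovery host
Enable-VMReplication -VMName "SQLVM01" -ReplicaServerName "drhost01.hoster.example" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos

# send the initial copy
Start-VMInitialReplication -VMName "SQLVM01"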

You should test whether the best performance and least write latency can be obtained with Hyper-V Replica or SQL Server AlwaysOn Availability Groups. Availability groups use asynchronous commits to write to DR secondary replicas.

4.8 Installing SQL Server

A SQL Server installation can be part of a virtual machine template. You can install SQL Server through the GUI version of the Setup program, or you can run Setup from a command line and supply command line arguments and a configuration file containing all the installation options. The output from the GUI Setup is a configuration file, and the path to the file is displayed on the last page of Setup before installation.

A configuration file is a text file that can be modified and used on subsequent calls to Setup from the command line. A few options are not included in the configuration file; for example, the license product ID and passwords for domain user accounts used for services. These can be specified as separate command line arguments.

For more information, see Installing SQL Server in the SQL Server 2014 Configuration and Management section later in this guide.

4.9 Hyper-V Virtual Machine Manager and Sysprep

Hyper-V virtual machine images, including SQL Server installations, can be prepared using Windows SysPrep and SQL Server SysPrep. SysPrep creates generalized VM images that do not have any computer-specific information, which allows you to have reference image templates of various configurations. When a copy of a Windows image boots for the first time, Windows-specific information is added, then you can run SQL Server Setup again to complete the SQL Server configuration.

After an image is configured and SQL Server is usable, run any post-deployment scripts to prepare the SQL Server instance for tenant use. Post-deployment scripts can be developed using the SQL Server Data Tools.


Page 16: Database Hosting Reference Architecture Guide_SQL_2014.docx

5 Virtual Machine Management

Figure 5: System Center, SQL Server Management Studio and Windows PowerShell are powerful client tools for managing virtual machines and SQL Server.

5.1 System Center Virtual Machine Manager

System Center Virtual Machine Manager (VMM) configures and tracks virtual data center resources including physical hosts, virtual servers, virtual storage, and the virtual network. VMM templates allow virtual machines of various sizes and resources to be created for easy deployment and provisioning. Templates can include Windows Server 2012 R2 and SQL Server 2014 Enterprise.

5.2 Virtual Machine Configuration

There are VM and operating system configuration items that can help with performance.

The power scheme in the VM Windows guest should be set to High Performance.

No applications other than SQL Server and related services should be running on the VM.

If the application has very heavy network requirements, consider creating a NIC team from multiple virtual NICs on the VM.

Use Single Root IO Virtualization when supported by NICs.

5.3 Live Migration and Storage Migration

Live Migration allows a virtual machine to be copied from one Hyper-V host to another while the VM is running, without interruption. Virtual machines can be copied within a cluster, between clusters, or between stand-alone servers. Shared storage and cluster nodes are no longer required for Live Migration in Windows Server 2012. Live Migration will typically be used by an administrator to migrate a virtual machine to a different physical host.

Storage Migration allows VHD and VHDX files to be moved to different physical storage devices while the VM is running, without interruption. This allows an administrator to move an entire SQL Server data or log partition to new storage without any application down time.


Page 17: Database Hosting Reference Architecture Guide_SQL_2014.docx

Live Migration and Storage Migration tasks can be carried out using PowerShell cmdlets for Hyper-V.
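
For example, assuming hypothetical host, VM, and path names:

# live migrate a running VM, including its storage, to another host
Move-VM -Name "SQLVM01" -DestinationHost "HVHOST02" `
    -IncludeStorage -DestinationStoragePath "D:\VMs\SQLVM01"

# move only the VM's storage to a different volume while the VM keeps running
Move-VMStorage -VMName "SQLVM01" -DestinationStoragePath "E:\VMs\SQLVM01"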

5.4 SQL Server Service

The domain account assigned to the SQL Server service should be granted the user right Lock Pages in Memory on the VM. This is especially important if Hyper-V Dynamic Memory will be used.

5.5 Database File Growth

Database files should use fixed sizes. If the tenant is allowed a certain amount of storage, allocate it all when creating the database for the best performance. File growth events can slow the performance of all the tenants sharing that storage.

If database files are allowed to grow, establish a growth increment (in gigabytes) that is a fixed amount. Do not use a percentage of file size as a growth increment. This can slow performance, especially with very large database files.
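
If growth must be allowed, a fixed increment can be set with ALTER DATABASE. The database and logical file names below are hypothetical; the statement could be run through Invoke-Sqlcmd or SQL Server Management Studio:

Invoke-Sqlcmd -ServerInstance "." -Query @"
ALTER DATABASE HostedDb
    MODIFY FILE (NAME = N'HostedDbData', FILEGROWTH = 1GB);
"@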

5.6 Dynamic Memory

Hyper-V Dynamic Memory allows more RAM to be dynamically allocated to busier VMs. When this happens, less busy VMs may have RAM removed.
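
A sketch of enabling Dynamic Memory on a SQL Server virtual machine; the VM must be turned off, and the values shown are illustrative:

Set-VMMemory -VMName "SQLVM01" -DynamicMemoryEnabled $true `
    -MinimumBytes 8GB -StartupBytes 12GB -MaximumBytes 32GB `
    -Priority 80 -Buffer 20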

If the amount of memory available to the VM is reduced to the point where the operating system does not have enough, SQL Server will give memory back to the operating system, which can slow SQL Server performance.

When you are determining how much memory to make available to SQL Server, you should keep in mind that SQL Server benefits from more memory the busier it is. When SQL Server needs the data for a table or index contained on a page that is not already in buffer pool memory, SQL Server loads it from disk. SQL Server will claim memory from the operating system for the buffer pool as long as it has room to expand as defined either through a configuration option or the maximum available in Windows.

If SQL Server can keep table and index pages in cache for longer periods of time, performance will improve. If memory is limited, SQL Server may have to remove a page from memory in favor of another page, only to have to go back and load the previous page from disk again when it is required. When this happens, disk activity increases, more CPU is needed to manage the process and performance slows.

Another requirement to consider is the memory needed to load in-memory OLTP tables and indexes. This memory is claimed immediately when the SQL Server service restarts, unlike buffer pool memory that is claimed gradually as needed. Memory requirements for in-memory OLTP tables and indexes may increase as new rows and new versions of rows (updates to existing rows) are added to the table. SQL Server database engine garbage collection will eventually remove old versions of rows from memory. Until that happens, enough memory to maintain multiple versions of a row is needed.

For busy virtual machines, Hyper-V can detect memory pressure and allocate additional memory to the VM. When additional memory is available to the operating system, SQL Server detects it and can claim it. The memory is typically made available to the buffer pool or wherever SQL Server has the greatest need.


Page 18: Database Hosting Reference Architecture Guide_SQL_2014.docx

When Hyper-V removes memory from a VM, the size of the SQL Server buffer pool must also decrease. If the Lock Pages in Memory user right was granted on the server to the domain account SQL Server uses, Windows will not write the released pages to the Pagefile and there will not be an impact on performance. However, if less memory than needed is available, there will not be enough for the buffer pool cache. Performance will be much slower if this happens while significant memory is needed for in-memory OLTP tables.

For these reasons, it is important not to rely too much on Dynamic Memory to sort out oversubscribed memory conflicts, because doing so can make the SQL Server instances much slower.


Page 19: Database Hosting Reference Architecture Guide_SQL_2014.docx

6 SQL Server 2014 Configuration and Management

This section is a guide to installing, configuring, and managing the SQL Server instance that will host tenant databases. The emphasis is on what the HSP's DBA will have to do at the service level.

6.1 Installation Prerequisites

Before you install SQL Server, Windows must have PowerShell 2.0 installed. The .NET Framework 3.5 should also be installed. If features selected during SQL Server Setup require these prerequisites and they are missing, the installation will fail.

6.2 PowerShell

SQL Server 2014 supports the use of PowerShell for automating administrative tasks such as deployment, Windows Clusters management, database creation, data imports, and job management. When you import the SQLPS module into the PowerShell 2.0 or later environment, the module loads and registers SQL Server components that can be used from the PowerShell console or in a script.

The SQLPS module allows the same SQL Server Management Objects (SMO) used by SQL Server Management Studio to be used by PowerShell. SQLPS also allows you to execute Transact-SQL scripts. The SQL Server Agent service can execute PowerShell scripts as job steps.

You may find a set of PowerShell scripts to be an important tool in database provisioning for tenants. These tools can also help DBAs avoid costly mistakes.

Developers can write and debug PowerShell scripts in the PowerShell Integrated Development Environment. You can also run scripts from the PowerShell console simply by referencing the file name. Often the requirements of the script will require the console be run as administrator.

In the following PowerShell example, lines beginning with ## are comments; lines beginning with PS> are executed at a PowerShell console command prompt or in a script.

## setting execution policy
PS> Set-ExecutionPolicy unrestricted

## Import with -DisableNameChecking to suppress warnings
PS> Import-Module "sqlps" -DisableNameChecking

PowerShell often returns little error information when the code generates an exception. If you expose the contents of the PowerShell $Error variable from a PowerShell command prompt immediately following the exception, you can find the true source of the error that was not available in the generic message.

PS C:\mc> $error[0]|format-list -force


Page 20: Database Hosting Reference Architecture Guide_SQL_2014.docx

6.3 Installing SQL Server

The Setup program on the SQL Server installation media accepts command line arguments and a configuration file can be used for an automated installation. Setup can also launch a step-by-step wizard for all the installation choices. A SQL Server installation can be part of a virtual machine template (see VM Virtual Hardware, above).

If SQL Server is installed from the GUI, it produces a configuration file as output. The path to the configuration file is displayed on the last screen of the Setup wizard before installation. It is a text file that you can modify and pass to Setup on the command line for subsequent installations. A few options are not included in the configuration file, for example, the license product ID and passwords for domain user accounts used for services. These can be specified as separate command line arguments.
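
A command-line installation that reuses a configuration file might look like the following, run from an elevated command prompt at the root of the installation media; the file path, passwords, and product key are placeholders:

Setup.exe /Q /ACTION=Install /CONFIGURATIONFILE="C:\Deploy\SQL2014Config.ini" /IACCEPTSQLSERVERLICENSETERMS /SQLSVCPASSWORD="<service account password>" /AGTSVCPASSWORD="<agent account password>" /PID="<product key>"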

6.4 SQL Server User Rights

There are user rights that can be assigned to the SQL Server service domain user account that are not assigned by default and that improve performance. Granting the SQL Server service account the Lock Pages in Memory user right on each node will reduce memory pressure and improve performance. After you assess the security risk, you may also choose to grant the service account the Perform Volume Maintenance Tasks user right on each node. This will substantially improve the performance of file growth events.

SQL Server uses Windows authenticated logons only by default. If client connections will be coming from untrusted domains or workgroups, then SQL Server authentication must also be used.

6.5 Creating Tenant Databases

Once the SQL Server service is installed and configured, you are ready to add tenant databases. A good practice is to keep scripts on hand for the create database commands. You can create and save Transact-SQL scripts or PowerShell scripts using SMO for common configurations.

The tenant may request a specific name for their database, one that is required by an application. This may affect the VM on which the database is allowed to reside as database names must be unique.

Database properties to specify at the time the database is created can include:

Whether this is a partially contained database on a multi-tenant SQL Server instance.

Data files, log files, and their locations. SMB file shares can be used. See the Physical Hardware Infrastructure, Storage Best Practices section above for notes on keeping data and log files on separate physical storage.

A Filegroup and file locations for in-memory OLTP data files.

The default collation for the database if it is other than the server default.

The initial size, maximum size, and file growth increments for data and log files. Best practice is that this initial space allocation be the full maximum database size and that the files not be allowed to grow. Substantial performance degradation can accompany file growth events.


Page 21: Database Hosting Reference Architecture Guide_SQL_2014.docx

The following PowerShell example could be run as a script file. It creates a SQL Server database, HostedDb, with a single 50GB data file E:\SqlData\HostedDbData.mdf and a single 30GB log file F:\SqlLog\HostedDbLog.ldf. Note that SMO accepts the size only as kilobytes. Both files are configured to be fixed in size and will not grow if they run out of space.

The grave accent (`) character at the end of a line indicates the statement continues on the next line. If you move all the lines in a statement to a single line, remove the grave accent characters.

# assumes the default instance of SQL Server on Localhost

$server = New-Object Microsoft.SqlServer.Management.SMO.Server ".";

# assign names to variables

$databaseName = "HostedDb";

$fileGroupName = "PRIMARY";

$dataFileLogicalName = "HostedDbData";

$logFileLogicalName = "HostedDbLog";

$dataFilePath = "E:\SqlData\HostedDbData.mdf";

$logFilePath = "F:\SqlLog\HostedDbLog.ldf";

# new database object contains database properties
$newDatabase = New-Object Microsoft.SqlServer.Management.SMO.Database `

($server, $databaseName);

# a FileGroup object is required to contain the DataFile object
$primaryFilegroup = New-Object `

Microsoft.SqlServer.Management.SMO.FileGroup `

($newDatabase, $fileGroupName);

# Add the filegroup object to the database object

$newDatabase.FileGroups.Add($primaryFilegroup);

# the new DataFile object sets properties for the database data file
$dataFile = New-Object `

Microsoft.SqlServer.Management.SMO.DataFile `

($primaryFilegroup, $dataFileLogicalName);
$dataFile.FileName = $dataFilePath;

# add the data file to the primary filegroup
$primaryFilegroup.Files.Add($dataFile);

# the LogFile object sets properties for the transaction log
$logFile = New-Object Microsoft.SqlServer.Management.SMO.LogFile `

($newDatabase, $logFileLogicalName);

$logFile.FileName = $logFilePath;


Page 22: Database Hosting Reference Architecture Guide_SQL_2014.docx

# file sizes in KB

$dataFile.Size = [double](52428800);

$logFile.Size = [double](31457280);

# No growth on these files

$dataFile.GrowthType = "None";

$logFile.GrowthType = "None";

# Add log file to the database

$newDatabase.LogFiles.Add($logFile)

# create the database

$newDatabase.Create()

If the above code generates an error, expose the PowerShell $Error variable contents by running the following from a PowerShell command prompt.

PS> $error[0]|format-list -force

6.6 Contained Databases and Users

The most important security concern on a multi-tenant SQL Server is to keep tenant users confined to their own database. A tenant user should not be able to see data in any other database, including system databases Master and MSDB. You set these boundaries through partially contained databases and contained users.

A contained database is isolated from all other databases. All user login information and metadata are stored in the database.

To enable a contained database, a server configuration option must first be set to allow contained databases on the SQL Server. The Containment property can then be set on the database. The following PowerShell example shows how the configuration option can be set by passing Transact-SQL code (the sp_configure stored procedure and reconfigure command) through the Invoke-Sqlcmd cmdlet.

Set-Location SQLSERVER:\SQL\LocalHost\Default

Invoke-Sqlcmd -Query `
"exec sp_configure 'contained database authentication', 1; reconfigure with override;" -ServerInstance (Get-Item .)

The containment for example database, HostedDb, is set by using an SMO object's ContainmentType property. The database is put into single user mode to obtain the necessary exclusive lock before setting the containment option.

# assumes the default instance of SQL Server on Localhost
$server = New-Object Microsoft.SqlServer.Management.SMO.Server ".";

# assign names to variables


Page 23: Database Hosting Reference Architecture Guide_SQL_2014.docx

$databaseName = "HostedDb";$containment = "Partial";

$db = $server.databases[$databaseName];

$db.UserAccess = "Single" ;

# the following is intended to be on a single line
$db.Alter([Microsoft.SqlServer.Management.Smo.TerminationClause]::RollbackTransactionsImmediately);

$db.ContainmentType = `
    [Microsoft.SqlServer.Management.Smo.ContainmentType]::Partial;
$db.UserAccess = "Multiple";
$db.Alter();

Contained users have their credentials contained within a single database. The client must connect by using that database name as part of the connection string. The user may not read any metadata outside the database including in the Master or MSDB databases, even though those databases have a guest user by default. By keeping a user completely isolated, each separate database is a secure container for each tenant without any overlap.
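
As an illustration, a client connecting as a contained SQL user must name the tenant database in the connection string; the server name and credentials below are hypothetical (a contained Windows user would use Integrated Security=True instead of a user ID and password):

# authenticate against the tenant database, not the server
$connectionString = "Server=sqlvm01.hoster.example;Database=HostedDb;" +
    "User ID=Tenant1User;Password=<password>;"
$connection = New-Object System.Data.SqlClient.SqlConnection $connectionString
$connection.Open()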

Management issues can occur if a tenant needs access to another database to complete a task. For example, with job creation and management, data is stored in the MSDB database. Consider how jobs and server-level activities like database file expansion or backup restoration will be accomplished on a multi-tenant SQL Server instance.

There are security concerns in contained databases if the user gains access to the database as a Windows principal instead of a contained user. DBAs need to take care that they create a contained user when that user is not intended to have permissions outside the contained database. See Security Best Practices with Contained Databases.

The SQL Server dynamic management view sys.dm_db_uncontained_entities lists any uncontained objects in the database, that is, those objects that cross a database boundary. In SQL Server 2014, in-memory optimized tables (see below) are returned by this view as uncontained.

The following PowerShell example creates a Windows domain account using the New-ADUser cmdlet, which prompts the user for the password. The script then creates a database user from a Windows domain user. The database user is not mapped to a login and so will be a contained user in the database.

$databaseName = "HostedDb";$windowsPrincipal = "User1";$userPrincipal = "Domain1\User1";

New-ADUser -SamAccountName $windowsPrincipal `
    -AccountPassword (Read-Host "Set user password" -AsSecureString) `
    -Name $windowsPrincipal -Enabled $true -PasswordNeverExpires $true `
    -ChangePasswordAtLogon $false;

$server = New-Object Microsoft.SqlServer.Management.SMO.Server ".";


Page 24: Database Hosting Reference Architecture Guide_SQL_2014.docx

$db = $server.databases[$databaseName];
$user = New-Object Microsoft.SqlServer.Management.SMO.User ($db, $userPrincipal);
$user.Create();

6.7 Memory-Optimized Tables

Memory optimized tables, also known as in-memory OLTP (online transaction processing) tables, are introduced in SQL Server 2014 Enterprise Edition. The following features distinguish memory-optimized tables from conventional SQL Server tables.

Memory-optimized tables have a different underlying structure than conventional tables and indexes. They are not stored on pages.

Memory-optimized tables are always in memory.

Memory-optimized tables require enough memory to store the entire table, indexes, row headers, plus copies of different versions of the same rows after updates occur.

There is a separate in-memory OLTP database engine for data search, access, and modification operations that is integrated with the SQL Server database engine.

Memory-optimized tables do not use locks to isolate readers and writers.

Durability is achieved by writing finished transactions to the database transaction log.

Transactions are atomic, consistent, and isolated by keeping different versions of a row after an update or delete until they are no longer needed.

If there is a write-conflict on a row between two separate transactions, SQL Server will pick one of the transactions to roll back.

Stored procedures that access memory-optimized tables can be compiled once using the C/C++ compiler and stored as a DLL. CPU time is saved by not having to recompile often.

SQL Server's conventional tables and indexes are stored on 8KB pages on disk and are cached in buffer pool memory only when required. Cached pages can, in turn, be removed from the buffer pool in favor of newly in-demand pages. By keeping data in memory, the in-memory OLTP database engine achieves greater performance for small transactions and finding small sets of rows.

Memory-optimized tables are transparent to Transact-SQL code and interoperate with conventional tables; SQL statements can use the join or union operators across conventional tables and memory-optimized tables.

In order to achieve even greater performance, SQL Server 2014 supports delayed durability of transactions. Delayed writes to the transaction log take place after a transaction commits. In return for this increased performance, data may be lost if a failure occurs before the write to disk takes place.
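
As a hypothetical sketch only, the following Transact-SQL (run here through Invoke-Sqlcmd) adds a memory-optimized filegroup and a durable memory-optimized table to a tenant database and allows delayed durability; the database, file path, and table names are illustrative.

# Add a memory-optimized filegroup and container to the tenant database,
# and allow delayed durability at the database level (illustrative names and paths)
Invoke-Sqlcmd -ServerInstance "." -Query @"
ALTER DATABASE HostedDb ADD FILEGROUP HostedDb_InMemory CONTAINS MEMORY_OPTIMIZED_DATA;
ALTER DATABASE HostedDb ADD FILE (NAME = 'HostedDb_InMemory', FILENAME = 'E:\Data\HostedDb_InMemory')
    TO FILEGROUP HostedDb_InMemory;
ALTER DATABASE HostedDb SET DELAYED_DURABILITY = ALLOWED;
"@

# Create a durable memory-optimized table with a hash index on the primary key
Invoke-Sqlcmd -ServerInstance "." -Database "HostedDb" -Query @"
CREATE TABLE dbo.SessionState
(
    SessionId   INT NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    SessionData VARBINARY(4000),
    LastAccess  DATETIME2 NOT NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);
"@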

There are important considerations for using in-memory tables in a VM environment. More memory will be required to load memory-optimized tables when SQL Server starts, and the VM needs to have that amount of memory available. Waiting for dynamic memory to force other VMs to reduce their usage can increase the time it takes for a database to become available to client connections. The number of VMs on a physical host could be reduced when one or more SQL Server instances have memory-optimized tables.

For tenants who require memory-optimized tables, your role will be to help calculate the amount of memory required and balance memory requirements with other tenants on a VM.

Memory-optimized tables are stored in files in their own separate filegroups and are loaded into memory when the SQL Server service starts. If these tables are very large, the loading process can be sped up by keeping these files on the fastest available storage for reads.

Memory-optimized objects are marked as breaking containment in contained databases by the SQL Server dynamic management view sys.dm_db_uncontained_entities.

6.8 Resource Governor

Resource governor is a feature of SQL Server 2014 Enterprise edition that limits resource consumption in order to maintain reliable performance expectations in a multi-tenant environment. Tenants with mixed workloads or inefficient code can be prevented from dominating resource consumption and limiting other tenants or even other VMs on the same host. Defining and implementing resource governor pools, workloads, and classifiers will be an important part of how you balance memory, CPU, and IO consumption between the tenants on a SQL Server instance.

Resource governor allows definitions of resource pools in terms of minimum and maximum CPU, memory, and IOPS per disk volume. Classifier functions map a client session to a workload group; the workload group is in turn mapped to a resource pool. The available resources in the pool are divided among the sessions in the workload groups mapped to that pool.

There can be multiple workload groups and resource pools per tenant database. For example, a backup job or a report might be limited while allowing month-end processing to have more resources.

Setting maximum IOPS limits per volume, a new feature starting with SQL Server 2014 Enterprise Edition, will help reduce the impact of maintenance operations such as rebuilding indexes and updating statistics. You can also limit how much CPU maintenance tasks, such as backup compression, consume. Building a separate resource pool for maintenance and defining a workload group for maintenance tasks can isolate these types of resource conflicts.

Resource governor can also be used to guarantee in-memory OLTP tables have enough memory. You can create a resource pool and bind it to the database containing the memory-optimized table for this purpose. You will need to work with the tenant to determine the amount of memory a memory-optimized table will need and set the resource pool to guarantee that amount or greater. To fix the amount of memory available to the memory-optimized table, the maximum memory percentage could be set to the same value as the minimum memory percentage.
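
The following is a minimal, hypothetical sketch of these steps in Transact-SQL (run through Invoke-Sqlcmd): it creates a resource pool with matching minimum and maximum memory percentages, a workload group, and a classifier function, and then binds a database containing memory-optimized tables to the pool. The pool, group, login, function, and database names are illustrative.

# Create a tenant resource pool, workload group, and classifier function, then bind the
# database that hosts memory-optimized tables to the pool (illustrative names throughout)
Invoke-Sqlcmd -ServerInstance "." -Database "master" -Query @"
CREATE RESOURCE POOL TenantAPool
    WITH (MIN_CPU_PERCENT = 0,  MAX_CPU_PERCENT = 40,
          MIN_MEMORY_PERCENT = 20, MAX_MEMORY_PERCENT = 20,
          MIN_IOPS_PER_VOLUME = 0, MAX_IOPS_PER_VOLUME = 500);
CREATE WORKLOAD GROUP TenantAGroup USING TenantAPool;
GO
-- Route sessions from the tenant's application login to the tenant workload group
CREATE FUNCTION dbo.fnTenantClassifier() RETURNS SYSNAME
WITH SCHEMABINDING
AS
BEGIN
    IF SUSER_SNAME() = N'TenantAAppLogin'
        RETURN N'TenantAGroup';
    RETURN N'default';
END;
GO
ALTER RESOURCE GOVERNOR WITH (CLASSIFIER_FUNCTION = dbo.fnTenantClassifier);
ALTER RESOURCE GOVERNOR RECONFIGURE;
-- Bind the database to the pool; the binding takes effect after the database
-- is taken offline and brought back online
EXEC sys.sp_xtp_bind_db_resource_pool N'HostedDb', N'TenantAPool';
"@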

Resource governor can return out-of-memory errors if the memory available to the pool is not enough to satisfy the needs of the memory-optimized workload. You may need to work with the tenant to resolve out-of-memory issues, such as by increasing the percentage of memory available to the pool that contains the memory-optimized tables.

Resource governor is disabled by default. You can enable it using the Transact-SQL ALTER RESOURCE GOVERNOR RECONFIGURE command. A separate white paper is an authoritative resource for creating and testing resource pools and workloads.

In PowerShell, the ResourceGovernor property of the SMO Server object exposes properties and methods to view or change resource governor settings.
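
As a hypothetical sketch, the following PowerShell enables resource governor and lists the CPU limits of the configured pools through SMO; it assumes the SMO assemblies are installed on the management machine.

# Enable resource governor and review pool CPU limits through the SMO Server object
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
$server = New-Object Microsoft.SqlServer.Management.SMO.Server "."
$server.ResourceGovernor.Enabled = $true
$server.ResourceGovernor.Alter()   # applies the change, like ALTER RESOURCE GOVERNOR RECONFIGURE
$server.ResourceGovernor.ResourcePools |
    Select-Object Name, MinimumCpuPercentage, MaximumCpuPercentage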

6.9 Self-Service Administration

While the tenant can create objects, write code, and set permissions within their own databases, there are server-level administrative activities outside the scope of the tenant's contained database that you will need to find a way to handle. Most common are scheduled jobs, but these activities can also include provisioning, such as increasing the storage space available to a database, and restoring databases into a defined structure for reporting or testing.

You can automate a large part of the provisioning process by giving the tenant self-service tools to use that have a look and feel similar to services like Windows Azure™. For example, a Microsoft ASP.NET client application might allow the tenant to perform only selected administrative tasks through a website user interface. Those tasks are validated and carried out with elevated permissions in code behind the website written using Microsoft ADO.NET.

Windows Azure Pack for Windows Server is one such set of tools. It runs on top of Windows Server 2012 R2 and System Center 2012 R2, and provides multi-tenant cloud services including SQL Server instance or database deployment. A customer-facing web UI uses an API to deploy cloud services.

On a smaller scale, a database management middle-tier client might allow a tenant to create or expand an existing database, create and manage scheduled jobs, add contained users to the database, and even restore backups, all with the restrictions the provisioning process requires. The user would not be able to act outside the restrictions of the environment.

Scheduled jobs are an important part of maintaining a database. Jobs may run backups, aggregate data, move data around, generate reports, and validate data integrity, along with routine maintenance like updating statistics and rebuilding indexes. The SQL Server Agent service is the primary tool for scheduling jobs, running them at the designated time, retrying in case of errors, and reporting status. Jobs can include SQL scripts, PowerShell commands, or a console application that runs from a command prompt. Jobs can generate events that cause alerts to fire, such as sending an email when a job fails.

Metadata about scheduled jobs, their alerts and schedules is contained in the MSDB database on each SQL Server instance. On a single-tenant SQL Server, a DBA would be able to view and manage jobs for any database. In a multi-tenant environment, having a way for tenants to manage their own jobs is a challenge. You must ensure that each tenant can see only the data about their own jobs and job history and not that of other tenants.

If you do not want your personnel to be responsible for managing jobs, and the delays in responding to the customer that can go along with that, you may want to develop a way for the tenant to manage jobs. The tenant should have a way of creating, viewing, and modifying the properties of their own jobs, such as changing schedules and job steps and viewing job history. You may limit the types of jobs the tenant can run, for example, allowing Transact-SQL scripts but not allowing file system access to run executables or batch files.

SQL Server Management Objects (SMO), the .NET namespace for SQL Server administrative activities, contains a robust API for managing jobs. Using SMO, PowerShell cmdlets, or SQL, .NET code could run as a middle tier behind an administrative website, giving the tenant the ability to view and manage jobs running in their own database while not allowing them to see data outside the tenant database. Middle-tier code can execute at the level of permissions necessary to manage jobs while validating the user's input and filtering output according to the desired restrictions. Besides security, these restrictions can prevent jobs requiring significant resources (such as backups) from running at the same time.
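
As a minimal, hypothetical sketch of this filtering, the following PowerShell uses SMO to return only the SQL Server Agent jobs whose steps run in a given tenant database; the database name is illustrative, and the code assumes the middle tier runs with permissions to read MSDB job metadata.

# List only the Agent jobs whose steps execute in the tenant's database
[void][System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")
$tenantDatabase = "HostedDb"
$server = New-Object Microsoft.SqlServer.Management.SMO.Server "."
$tenantJobs = $server.JobServer.Jobs |
    Where-Object { $_.JobSteps | Where-Object { $_.DatabaseName -eq $tenantDatabase } }
$tenantJobs | Select-Object Name, IsEnabled, LastRunDate, LastRunOutcome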

While such a development effort would have significant up-front cost, it can be a worthwhile investment because it allows tenants to manage their own jobs and reduces your labor-intensive database administration work.

6.10 High Availability and Disaster Recovery Using SQL Server Solutions

AlwaysOn Availability Groups, introduced in SQL Server 2012 and enhanced in SQL Server 2014, provide high availability and disaster recovery. Availability groups can be thought of as an ongoing, streaming transaction log restore from a single primary replica, which accepts writes, to one or more secondary replicas. An extensive set of PowerShell cmdlets is available for creating, managing, and monitoring the health of availability groups.

Availability groups can write each transaction synchronously to another server in the group and automatically fail over for high availability. For disaster recovery, SQL Server can write transactions to secondary servers asynchronously, typically with very short latency. Failover to asynchronous secondary replicas is manual, with the possibility of data loss should a transaction commit on the primary replica and the failure occur before the write takes place on the secondary replica.
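
The following hypothetical sketch uses the SQLPS availability group cmdlets to build a two-replica synchronous availability group. The server, database, and endpoint names are illustrative, and it assumes the Windows failover cluster, database mirroring endpoints, and a restored copy of the database (WITH NORECOVERY) already exist on the secondary instance.

# Create a two-replica synchronous availability group (illustrative names)
Import-Module SQLPS -DisableNameChecking
$primary   = New-SqlAvailabilityReplica -Name "SQLNODE1" -EndpointUrl "TCP://SQLNODE1.contoso.com:5022" `
                 -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 12
$secondary = New-SqlAvailabilityReplica -Name "SQLNODE2" -EndpointUrl "TCP://SQLNODE2.contoso.com:5022" `
                 -AvailabilityMode SynchronousCommit -FailoverMode Automatic -AsTemplate -Version 12
New-SqlAvailabilityGroup -Name "HostedAG" -Database "HostedDb" `
    -AvailabilityReplica @($primary, $secondary) -Path "SQLSERVER:\SQL\SQLNODE1\DEFAULT"
# Join the secondary replica and its copy of the database to the group
Join-SqlAvailabilityGroup -Path "SQLSERVER:\SQL\SQLNODE2\DEFAULT" -Name "HostedAG"
Add-SqlAvailabilityDatabase -Path "SQLSERVER:\SQL\SQLNODE2\DEFAULT\AvailabilityGroups\HostedAG" -Database "HostedDb"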

Availability groups are built on top of Windows Server Failover Clustering. The cluster service on each node keeps track of the health of the nodes and triggers a failover should the primary node fail. SQL Server carries out the work of an automatic or manual failover, recovering the secondary replica and then bringing it online as the primary replica.

Availability Group Listeners are network names and IP addresses assigned to an availability group. SQL Server and Cluster Services automatically point the listener at the primary node for client connections. When a failover occurs and the new primary replica comes online, new client connections automatically go to the new primary node.

Read-only routing is a feature of availability groups that can help with multiuser concurrency issues and load balancing. With read-only routing, when client connections specify they are read-only, the connection can be routed to a secondary replica. Read-only connections can survive the loss of the primary replica and the loss of the cluster quorum with the secondary replicas remaining available.

Fewer read-only connections on the primary replica mean fewer blocking locks between readers and writers. If read-only connections are doing large-scale scans of data, the secondary nodes help absorb this potentially heavy CPU and IO resource consumption.
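
For example, a reporting client might request a read-intent connection through the listener; this hypothetical snippet assumes a listener named HostedAGListener and read-only routing already configured on the replicas.

# Open a read-intent connection that read-only routing can redirect to a secondary replica
$connectionString = "Server=tcp:HostedAGListener,1433;Database=HostedDb;" +
                    "Integrated Security=SSPI;ApplicationIntent=ReadOnly;MultiSubnetFailover=True"
$connection = New-Object System.Data.SqlClient.SqlConnection $connectionString
$connection.Open()
# ...run reporting queries here...
$connection.Close()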

The data on the secondary replicas is typically within a few seconds of being up to date, and latency on a secondary replica can be measured if it must stay within defined limits. With SQL Server 2014 there can be up to eight secondary replicas.

6.11 Policy-Based Management

Policy-based management can help maintain tight control over SQL Server instances and databases. Policies are defined and then periodically checked for compliance, a form of auditing that eliminates the need to write manual auditing code. Policies can check any of a number of conditions, such as whether Database Mail is disabled or whether a database is accessible only by a particular client.

When a condition is found that is out of compliance, an alert or a critical health warning is displayed through SQL Server Management Studio. Code executed through the SQL Server Agent Service using Transact-SQL or Windows PowerShell® may be used to fix the condition.
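
As a hypothetical sketch, a scheduled job might evaluate an exported policy file against an instance with the Invoke-PolicyEvaluation cmdlet; the policy file path is illustrative (policies can be exported from SQL Server Management Studio as XML).

# Evaluate an exported policy definition against the local instance
Import-Module SQLPS -DisableNameChecking
Invoke-PolicyEvaluation -Policy "C:\Policies\DatabaseMailDisabled.xml" `
    -TargetServerName "." -AdHocPolicyEvaluationMode Check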

6.12 Backups

Backups of databases and transaction logs should be scheduled periodically in accordance with SLAs. Backups are typically run as scheduled jobs by the SQL Server Agent service.

If a database is set to the simple recovery model, only full and differential database backups can run, and the database can be restored only as far forward as the last full or differential database backup. The transaction log of the database is automatically truncated when a checkpoint occurs, and space in the log files is recycled for new transactions.

If the database is set to the full or bulk-logged recovery models, the transaction log should be backed up as often as necessary to stay in alignment with SLAs. This is typically every 15 to 30 minutes. A full database backup might be done once a week with a differential backup done every day for busy databases.

You may be managing backup schedules in order to avoid having many tenants running backup jobs at the same time. Backup files can also be encrypted starting with SQL Server 2014. The New-SqlBackupEncryptionOption and Backup-SqlDatabase cmdlets can be used to perform an encrypted backup in a PowerShell script.
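
A minimal, hypothetical sketch of an encrypted full and log backup follows; it assumes a server certificate named BackupCert already exists in the master database, and the instance, database, and file paths are illustrative.

# Back up the tenant database and its transaction log with backup encryption
Import-Module SQLPS -DisableNameChecking
$encryptionOption = New-SqlBackupEncryptionOption -Algorithm Aes256 `
    -EncryptorType ServerCertificate -EncryptorName "BackupCert"
Backup-SqlDatabase -ServerInstance "." -Database "HostedDb" `
    -BackupFile "E:\Backups\HostedDb.bak" -EncryptionOption $encryptionOption
Backup-SqlDatabase -ServerInstance "." -Database "HostedDb" -BackupAction Log `
    -BackupFile "E:\Backups\HostedDb.trn" -EncryptionOption $encryptionOption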

6.13 Monitoring

Careful monitoring of physical hosts, VMs, and SQL Server instances will enable you to see performance and application trends and events. With SQL Server, these can include application errors, blocking locks, deadlocks, long-running queries, and instances that consume excessive resources.

Monitoring CPU usage, disk usage, and the SQL Server buffer cache hit ratio is an easy way of making sure that a database instance is not consuming too many resources on the VM and that the VM is not consuming too many resources on the physical host. If CPU usage exceeds 60 percent per processor for 80 percent of the time, additional CPU resources should be made available. A cache hit ratio of less than 98 percent for any extended period of time can indicate that no more databases should be provisioned on that VM. A cache hit ratio much lower than that indicates that restrictions should be placed on memory consumption or that you should consider placing the VM on a different physical server.
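
For example, the following hypothetical snippet samples the processor and buffer cache hit ratio counters from inside the guest VM; the counter paths assume a default SQL Server instance (named instances expose counters under MSSQL$InstanceName).

# Sample CPU and buffer cache hit ratio every 15 seconds, four times
$counters = "\Processor(_Total)\% Processor Time",
            "\SQLServer:Buffer Manager\Buffer cache hit ratio"
Get-Counter -Counter $counters -SampleInterval 15 -MaxSamples 4 |
    ForEach-Object { $_.CounterSamples | Select-Object Path, CookedValue }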

6.14 System Center Operations Manager

System Center Operations Manager (SCOM) enables services, devices, and operations across many computers to be monitored from a single console. SCOM monitors health and performance, detects problems, alerts administrators that a problem has occurred, identifies what is causing the problem, and shows which applications are affected. SCOM uses SQL Server databases to store configurations and short-term monitoring data. It uses a data warehouse for storing and reporting on historical long-term monitoring data, including event log data, performance counter data, and Windows Management Instrumentation data.

The SQL Server Management Pack for System Center 2012 R2 Operations Manager enables monitoring of SQL Server instances and databases. It can alert on errors and conditions such as database files exceeding a threshold of space used, database file growth, long-running jobs, blocking locks, and deadlocks. It can also identify which SQL Server instances and databases are consuming the most resources and which might be good candidates for more stringent Resource Governor resource pool definitions.

Along with System Center Service Manager, System Center Virtual Machine Manager and System Center Operations Manager can all be integrated to create a chargeback system for more easily charging customers for what they use.

7 Tenant Database Management

Tenants should be responsible for managing their own databases. Contained databases and users allow them to carry out the tasks of the DBA and developers within their databases, such as creating objects, assigning permissions, writing code for their applications, and adding, modifying, and reporting on data.

7.1 Importing Data

One of the first challenges for you and the tenant will be how the initial data gets into a new tenant database. A backup from an on-premises database can be restored, or the database structure can be scripted from its source and the data copied in bulk to the new destination.

There are also tools that may help, such as the SQL Server Migration Assistant. Administrative clients like SQL Server Management Studio can make the scripting and data transfer easier. The bcp utility, a command-line tool that uses the Bulk Copy Program API, and PowerShell allow the bulk copy to be scripted.
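
As a hypothetical sketch, after scripting the schema you might export and import a table's data with bcp; the server, database, table, and file names below are illustrative, and -T uses Windows authentication.

# Export a table from the on-premises server, then import it into the hosted database
bcp HostedDb.dbo.Customers out "C:\Transfer\Customers.dat" -S OnPremServer -T -n
bcp HostedDb.dbo.Customers in "C:\Transfer\Customers.dat" -S HostedSqlServer -T -n -b 10000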

7.2 SQL Server Migration Assistant

The SQL Server Migration Assistant allows existing schemas in Oracle, MySQL, Sybase, and Microsoft Access to be mapped to a SQL Server database schema or converts the source schema to a SQL Server schema. Data can then be migrated from the source database to the SQL Server database.

7.3 MAP Toolkit

As you help your customer decide which databases are good candidates to be migrated to cloud hosting, suggest that she run Microsoft's Assessment and Planning (MAP) Toolkit against her on-premises SQL Servers. This tool evaluates how resource-intensive a database is and how large it is to determine whether it can go through self-service provisioning. If the database is of sufficient size and requires enough resources, analyze it further to recommend what the size of the virtual machine should be and on which physical server the virtual machine should reside.

The MAP Toolkit is a client that scans the network for Windows Servers, SQL Servers, and virtual machines. The MAP Toolkit also will discover Oracle, Sybase, and MySQL instances running on UNIX and Linux. The MAP toolkit inventories the servers and collects assessment data that it presents in various reports. You can use this information to determine the proper level of virtual machine resources, such as virtual CPUs, memory, and storage requirements.

7.4 SQL Server Management Studio

SQL Server Management Studio (SSMS) is the most common system administration and database administration client for SQL Server. It is installed through the SQL Server setup program, but it can be installed on a computer without the SQL Server database engine.

SSMS is used to manage SQL Server instances and databases. You may allow a tenant to use SQL Server Management Studio to manage their database as long as the tenant users have been secured as contained users. Because contained users are isolated inside the tenant database, SSMS will have limited functionality for the tenant. It can be beneficial to the tenant for tasks like creating and altering database objects, developing programmable objects, and for granting, revoking, and denying permissions.

7.5 Performance Tuning

Customers consolidating databases in the cloud may have fewer DBAs to handle the routine maintenance tasks in a database, including updating statistics and reducing index fragmentation. Reducing index fragmentation reduces the number of jumps from page to page during a search.
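
The following hypothetical maintenance sketch finds heavily fragmented indexes in a tenant database, rebuilds the indexes on an illustrative table, and updates statistics; thresholds and names should be adjusted for the workload and run in a low-activity window.

# Report fragmented indexes, rebuild an illustrative table's indexes, and update statistics
Invoke-Sqlcmd -ServerInstance "." -Database "HostedDb" -Query @"
SELECT OBJECT_NAME(ips.object_id) AS table_name, i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON ips.object_id = i.object_id AND ips.index_id = i.index_id
WHERE ips.avg_fragmentation_in_percent > 30;

ALTER INDEX ALL ON dbo.Customers REBUILD;
EXEC sp_updatestats;
"@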

Another common obstacle to good performance on large tables is that the indexed columns do not match the columns used in common searches. Scanning an entire table increases CPU time, disk IO, and pressure on buffer pool memory. Architecting indexes in anticipation of application requirements can improve performance, and SQL Server 2014 columnstore indexes can greatly improve performance for some searches.

You can benefit your customers and improve overall performance by offering assistance in architecting indexes for performance and by providing the ability to schedule and run maintenance jobs. You will need to be aware of times of peak application workloads for all tenants sharing the SQL Server instance and the same storage. You can use resource governor pools to limit how many resources these maintenance jobs consume.

8 Summary

Hosted service providers are giving their customers more choices, and customers are eager to be a part of the acceleration in cloud computing. The advantages of pooling resources, not investing in on-premises hardware, and making more efficient use of shared compute, network, and storage resources are becoming more apparent all the time. Databases and supporting infrastructure can be more flexible in meeting demand using SQL Server 2014 on Windows Server 2012 R2 Hyper-V virtual machines. Using the System Center suite of applications, provisioning and tracking usage is easier. With these advantages, along with the lower total cost of ownership that multi-tenant database instances provide, Database as a Service provides a great opportunity for hosted service providers to build their business in new and exciting ways.

The information contained in this document represents the current view of Microsoft Corporation on the issues discussed as of the date of publication. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information presented after the date of publication.

This white paper is for informational purposes only. Microsoft makes no warranties, express or implied, in this document.

Complying with all applicable copyright laws is the responsibility of the user. Without limiting the rights under copyright, no part of this document may be reproduced, stored in, or introduced into a retrieval system, or transmitted in any form or by any means (electronic, mechanical, photocopying, recording, or otherwise), or for any purpose, without the express written permission of Microsoft Corporation.

Microsoft may have patents, patent applications, trademarks, copyrights, or other intellectual property rights covering subject matter in this document. Except as expressly provided in any written license agreement from Microsoft, the furnishing of this document does not give you any license to these patents, trademarks, copyrights, or other intellectual property.

© 2014 Microsoft Corporation. All rights reserved.

The example companies, organizations, products, domain names, e-mail addresses, logos, people, places, and events depicted herein are fictitious. No association with any real company, organization, product, domain name, e-mail address, logo, person, place, or event is intended or should be inferred.

Microsoft, Hyper-V, SQL Server, Windows, Windows Azure, Windows PowerShell, and Windows Server are either registered trademarks or trademarks of Microsoft Corporation in the United States and/or other countries.

The names of actual companies and products mentioned herein may be the trademarks of their respective owners.