
Sunday 29 November 2020

Azure SQL Basic Options Summary

Overview: Azure SQL is incredible.  There are a lot of options when choosing how to host a database, and performance is good.  Azure SQL "handles patching, backups, replication, failure detection, underlying potential hardware, software or network failures, deploying bug fixes, failovers, database upgrades, and other maintenance tasks", from Microsoft Docs and Azure SQL.

Azure SQL is the PaaS database service that performs the same functions SQL Server did for us for many years as the workhorse of many organisations.  Microsoft initially only offered creating VMs and installing SQL Server on them, as you would on-prem.   Azure SQL is Microsoft's PaaS SQL database-as-a-service offering on the cloud: a fully managed platform for SQL databases where Microsoft handles patching, managed backups and high availability.  All the features available in the on-prem edition are also built into Azure SQL, with minor exceptions.  I also like that the minimum SLA provided by Azure SQL is 99.99%.

Three SQL Azure PaaS Basic Options:

  1. Single Database - A single isolated database with its own guaranteed CPU, memory and storage.
  2. Elastic Pool - A collection of single isolated databases that share DTUs (CPU, memory & I/O) or virtual cores.
  3. Managed Instance - You manage a set of databases with guaranteed resources.  Similar to IaaS with SQL installed, but Microsoft manages more of the parts for me.  Can only be purchased using the virtual core model (no DTU option).
Thoughts: Managed Instances are recommended up to 100TB but can go higher.  Individual databases, whether in elastic pools or single databases, are limited to a respectable 4 TB.

Two Purchasing Options:

  1. DTU - A single blended metric that Microsoft uses to calculate CPU, memory and I/O.
  2. Virtual Cores - Allows you to choose your hardware/infrastructure.  You can optimise the memory-to-CPU ratio rather than taking the generalist DTU blend.
Thoughts:  I prefer the DTU approach for SaaS and greenfield projects.  I generally only consider virtual cores if I have migrated on-prem SQL onto a Managed Instance, or for big workloads, where virtual cores can work out cheaper if the load is consistent.  There are exceptions, but that is my general rule for choosing the best purchasing option.  A T-SQL sketch of both models follows.
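As a rough sketch of the two models in T-SQL (run in the logical server's master database; the database name and service objectives are examples - 'S0' is a Standard DTU size, 'GP_Gen5_2' a General Purpose vCore size):

-- DTU model: create a Standard S0 database
CREATE DATABASE MyAppDb (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S0');
-- vCore model: scale the same database to General Purpose, Gen5, 2 vCores
ALTER DATABASE MyAppDb MODIFY (EDITION = 'GeneralPurpose', SERVICE_OBJECTIVE = 'GP_Gen5_2');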

Three Tiers:

  1. General Purpose/Standard (there is also a lower Basic level)
  2. Business Critical/Premium
  3. Hyperscale

Backups

Point-in-time backups are automatically stored for 7 to 35 days (default is 7 days) and protected using TDE; full, differential and transaction log backups are used for point-in-time recovery.  The backups are stored in RA-GRS blob storage (meaning in the primary region, with read-only copies stored in a secondary Azure region): 3 copies of the data in the active Azure region and 3 read-only copies of the data.

Long Term Retention (LTR) backups can be kept for up to 10 years; these are full backups only.  The smallest retention option is keeping each week's full backup.  LTR is in preview for Managed Instances.

Azure Defender for SQL 

Monitors SQL database servers, providing vulnerability assessments (best-practice recommendations) and Advanced Threat Protection, which monitors traffic for abnormal behaviour.

Checklist:

  1. Only valid IPs can directly access the database; deny public access,
  2. Use AAD security credentials and service principals (see the T-SQL sketch after this list),
  3. Advanced Threat Protection gives real-time monitoring of logs and configuration (it also scans for vulnerabilities),
  4. The default is encryption in transit (TLS 1.2) and encryption at rest (TDE) - don't change it,
  5. Use dynamic data masking inside the db instance for sensitive data e.g. credit cards (also in the sketch below),
  6. Turn on SQL auditing,
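A minimal T-SQL sketch of items 2 and 5, assuming an AAD principal called [dev-team] and a hypothetical dbo.Customer table with a CardNumber column:

-- Item 2: contained user from Azure AD (run in the user database)
CREATE USER [dev-team] FROM EXTERNAL PROVIDER;
ALTER ROLE db_datareader ADD MEMBER [dev-team];
-- Item 5: mask a sensitive column; non-privileged users see XXXX-XXXX-XXXX-1234
ALTER TABLE dbo.Customer
  ALTER COLUMN CardNumber ADD MASKED WITH (FUNCTION = 'partial(0,"XXXX-XXXX-XXXX-",4)');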

Note: Elastic Database Jobs are the Azure SQL equivalent of SQL Agent Jobs.

Azure also offers MySQL, PostgreSQL and MariaDB as hosted PaaS offerings.

Note: The Azure SQL PaaS service does not support the filestream datatype: use varbinary or references to blobs.
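For example, a hypothetical documents table using varbinary(max) in place of filestream:

CREATE TABLE dbo.Document (
  Id INT IDENTITY PRIMARY KEY,
  FileName NVARCHAR(260) NOT NULL,
  Content VARBINARY(MAX) NOT NULL  -- filestream is not supported in Azure SQL PaaS
);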

Sunday 6 September 2020

Working with CDS data structures for non-CRM types

Overview:  I am working on a Power Platform solution and I need to use CDS.  Basically, I need to be able to see and edit values within CDS.

Option 1: Microsoft SQL Server Management Studio (SSMS) version 18.6 allows connectivity and read-only access.  Here are the instructions.

Option 2: XrmToolBox has fantastic tools for Dynamics and Power Apps.  There are a lot of individual tools from various contributors.

Here I am using "Entity Relationship Diagram Creator" to look at the relationships between the CDS entities.




Thursday 4 April 2019

Adding users to all new SQL databases using Azure AD groups

Problem:  I have a dedicated SQL 2017 VM on Azure that is joined to my Azure AD tenant e.g. int.contoso.com (Azure AD Domain Services).  I need a set of users to have read and write access to all databases that get provisioned on the SQL 2017 instance.

Initial Hypothesis: 
Create an Azure AD security group and add all the AAD users, then
add the AAD group to the model database with the permissions that all new databases should have.

Resolution:
1. Using Azure AD create a new security group; I called my group Developers and added the users as members (Fig 1 & Fig 2).
Fig 1. Azure Portal, go to Azure AD and Groups

Fig 2. Add the security group

2. Add the AAD group e.g. int.contoso.com\Developers to the system "model" database; I have given the group read and write access below in Fig 3.
Fig 3. Add permissions to the Model DB
3. Create a new database and validate that the new permissions are added to the new database, as shown in Fig 4.
Fig 4.
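A T-SQL equivalent of steps 2 and 3 above, assuming the Windows group int.contoso.com\Developers (INT\Developers) - new databases are copied from the model db, so the role memberships carry over:

CREATE LOGIN [INT\Developers] FROM WINDOWS;
USE model;
CREATE USER [INT\Developers] FOR LOGIN [INT\Developers];
ALTER ROLE db_datareader ADD MEMBER [INT\Developers];
ALTER ROLE db_datawriter ADD MEMBER [INT\Developers];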
Note: Changing existing DB permissions
To add permissions to an existing database, one option is to run:
EXEC sp_MSForEachDB 'USE [?]; EXEC sp_addrolemember ''db_datareader'', ''INT\paul.beck'''
T-SQL to list the databases: SELECT name FROM sys.databases


Monday 2 February 2015

Encrypting Content databases

Overview: TDE is Transparent Data Encryption, where you encrypt your "data at rest"; this encrypts the SQL mdf and ldf files.  Few enterprises require TDE for content databases, but if your customer has specific enterprise security requirements (encryption at rest for highly confidential data) or compliance requirements such as SOX, HIPAA, or the Payment Card Industry Data Security Standard (PCI DSS), TDE may be an easy win.  The T-SQL steps are sketched after the notes below.
Notes:
  • TDE is only available in the Enterprise Edition of SQL Server 2008, 2012 and 2014.
  • SP blobs are stored outside of the mdf, so they are not encrypted by TDE.
  • Only content databases can be encrypted (not verified).
  • Search indexes are obviously not encrypted by TDE.
  • Encrypting the connections to SQL (or IPsec) is needed to encrypt data between SP and SQL; this is not covered by TDE.  Nor are any calls to web services or data in transit - use SSL.
  • TempDB is encrypted even if only 1 db is using TDE.
  • Applies to SP2013 on-prem farms only.
  • I believe O365 uses BitLocker.
  • Vendors such as Vormetric also offer encryption at rest for SQL and other databases.
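For reference, the standard T-SQL sequence to enable TDE on a content database (the database name, certificate name and password are placeholders - back up the certificate, or you will not be able to restore the db elsewhere):

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Passphrase';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO
USE WSS_Content;
CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 ENCRYPTION BY SERVER CERTIFICATE TDECert;
ALTER DATABASE WSS_Content SET ENCRYPTION ON;
GO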

More Info:
Storage and SQL Server capacity planning and configuration (SharePoint Server 2013)
http://www.slideshare.net/michaeltnoel/transparent-data-encryption-for-sharepoint-content-databases
http://www.vormetric.com/search-results?query=SharePoint
http://web.townsendsecurity.com/bid/64783/4-Ways-to-Encrypt-Data-in-Microsoft-SQL-Server


Thursday 2 January 2014

Understanding SQL backups on SharePoint databases using the Full recovery mode

Overview:  This post looks at reducing the footprint of the ldf file.  SharePoint-related databases using the Full recovery model keep all transactions since the last "differential".  It explains how SQL is affected by backups such as SQL backups, SP backups and 3rd party backup tools (both SP backup and SQL backup tools).

This post does not discuss why all your databases are in the Full recovery model, nor does it compare competing backup products.  It does contain steps to truncate and then shrink the size of the transaction log.

Note: Shrinking an ldf file only for it to regrow each week/cycle is bad practice.  The only time to shrink is when the log holds unused transactions that are already covered by backups.

Background
  • Using the full recovery model allows you to restore to a specific point in time.
  • A Full backup is referred to as your "base of the differential".
  • A "copy-only" backup cannot be used as a "base of the differential"; this becomes important when there are multiple providers backing up SQL databases.
  • After a full backup is taken, differential backups can be taken.  Differentials contain all changes to your database since the last "base of the differential".  They are cumulative, so obviously they grow bigger with each subsequent differential backup until a new "base of the differential" is taken.
  • To restore, you need the "base of the differential" (last full backup) and the latest "differential" backup.
  • You can also back up the transaction logs (in effect the ldf).  These need to be restored in sequential order, so you need all log file backups.
  • If you still have your database, you can take a "tail-log" backup; this will allow you to restore to any point in time.
  • Every backup gets a "Log Sequence Number" (LSN); this allows the restore to build a chain of backups for restore purposes.  This chain can be broken by 3rd party tools or by switching the database into the simple recovery model.
  • A confusing pair of closely related terms is "shrinking" and "truncating".  You may notice an ldf file has got extremely large; if you are performing full backups on a scheduled basis, that is a good size to keep the ldf at, as you don't want to grow ldf files on the fly - it is extremely resource intensive.  However, say your log file has not been purging/removing transactions within a cycle; then you may have a completely oversized ldf file.  In this scenario, you want to perform a full backup and truncate your logs.  Truncation does not shrink the file; the unused records are merely marked as available to be used again.  You can then perform a "shrink" to reduce the size of the ldf file - you don't want ldfs growing every cycle, so don't schedule the shrinking.
  • "Truncating" is marking committed transactions in the ldf as "free", or available for writing new transactions to.
  • "Shrinking" will reduce the physical size of the ldf.  Shrinking can reclaim space from the "free" space in the ldf.
The process to shrink the transaction log files:

1.> Determine which databases are suitable candidates for shrinking the log file. 
DBCC SQLPERF(LOGSPACE);


2.> Perform a full backup.
3.> Next perform a transaction log backup, which truncates the log (T-SQL for steps 2 and 3 is sketched after step 5).
4.> Run DBCC SHRINKFILE as shown below (please remember to leave growth room so the ldf file is not constantly regrowing; keep extra room in the ldf - shrinking should only be used to reduce a log file that has grown way too far).  The example below will leave my transaction log at 100MB.
USE AutoSP_Config;
GO
DBCC SHRINKFILE (AutoSP_Config_log, 100);
GO

5.> Verify the ldf file has reduced in size.
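Steps 2 and 3 in T-SQL, assuming the AutoSP_Config database from the example and a local backup folder (the path is an example):

BACKUP DATABASE AutoSP_Config TO DISK = 'D:\Backups\AutoSP_Config.bak';   -- step 2: full backup
BACKUP LOG AutoSP_Config TO DISK = 'D:\Backups\AutoSP_Config_log.trn';    -- step 3: log backup marks inactive log space as free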

Update 2014-02-05: I have AlwaysOn on my separate SharePoint Reporting Services database.  The SP_RS_ServiceTempDB should be excluded from the availability groups; keep it in the simple recovery model.  Its logs grow extremely quickly and it does not need to be in Full recovery.

Update 2014-04-25: Shrinking AlwaysOn databases does not require you to remove the database from the AOAG; you can merely perform a full backup on the primary and then shrink the database. Note: Watch the backup chains - you can break them, especially if you have a 3rd party backup tool on SharePoint.

Update 2019-04-18:
Change a database from Full recovery model to simple using T-SQL and shrink the ldf file
Use [LiveDB]
GO
ALTER DATABASE [LiveDB] SET RECOVERY SIMPLE WITH NO_WAIT
GO
DBCC SHRINKFILE ('LiveDB_log', 10)  -- logical name of the ldf; check sys.database_files
GO

The normal process is to change the db as follows: 1) set Simple recovery, 2) shrink the ldf (remember to size it so it can handle a full backup cycle), 3) set Full recovery, 4) perform a full backup (starting the SQL backup chain again).

Monday 21 October 2013

SQL Server 2012 for SharePoint 2013 checklist

 Checklist for SQL Server 2012 for SharePoint 2013
  1. Use multiple SQL Aliases (a separate one for search).
  2. Dedicate a SQL Server to SharePoint.
  3. Set max degree of parallelism (MAXDOP) to 1 for SQL instances (SP will set this when SP is installed); it is the number of processors used for each SQL statement.  A T-SQL sketch for this and the model/tempdb settings follows this list.
  4. Mixed mode authentication - don't install SQL 2012 for SP in mixed mode auth unless you have good reason (the only reason I have heard of is from Todd Klindt's podcast, which mentions Access Services needs to use the SA account).  If you have other databases that need SQL authentication, consider moving them to a dedicated SQL instance.
  5. SQL Server 2012 AlwaysOn Availability Groups are a new high availability and disaster recovery solution that are an alternative to database mirroring and log shipping solutions. AlwaysOn Availability Groups support a set of primary read-write databases and up to four sets of secondary databases that can be set as read-only.
  6. Memory: You can set the max memory each SQL instance can use.  If the machine is dedicated to only providing SQL for SharePoint, the max setting is total memory minus 4GB for the OS.  See image 3.
  7. Model DB: Increase initial size and autogrowth settings - use fixed growth sizes.  I would start with 100MB for the mdf and 20MB for the ldf as initial sizes.  For autogrowth on content dbs I start with 50MB growth for the mdf and 25MB for the log file (ldf).  See image 1 below.
  8. Model DB: Don't modify DB Collation after install.
  9. Model DB: Choose the recovery model on the model system database deliberately - Full allows point-in-time restore, while Simple prevents large log files.
  10. Avoid giant ldf log files (don't use DBCC SHRINKFILE to resize ldf files by switching to the simple recovery model - it breaks the LSN/log backup chain).  ldf growth is far more resource intensive than mdf growth.  The problem I see with content db growth is that the IT pro lets the ldf get out of control, then backs up and shrinks the database; usage causes the ldf to autogrow periodically and the farm goes back to needing the process repeated, with heavy growth issues.  The key is to ensure the ldf has a decent initial size (you can work this out between full backup cycles); the ldf for content dbs should rarely need to autogrow, and when it does, make the growth a fixed amount.
  11. TempDB: Having multiple tempdb data files speeds up SQL performance.  The tempdb is a system db whose resources are available to all users.
  12. TempDB: Increase its initial size & set autogrowth in MB as opposed to percent (see image 2).
  13. TempDB: Simple recovery model for TempDB is correct.
  14. TempDB: The default is 1 data file; you need more depending on how many CPU cores are on your database server.  One option is to set the number of tempdb files to match the number of CPU cores (1-to-1).  Some folks recommend 1 less than the number of CPU cores, other folks go for 1 tempdb file per 2 CPU cores.  I start with 4 and tune in performance testing or once it is running; saying that, I normally have 16 cores.  I can't see performance gains in my testing beyond 4 tempdb files, but as a rule I'd start with 1 tempdb file for each 2 cores and tune from there.
  15. TempDB: Move the tempdb mdf files and the tempdb ldf file to their own drive - as fast a drive as possible.
  16. Content DBs: CA or PS, when creating a new content db, won't take all the model settings; it does take the initial sizes but not the autogrowth settings.
  17. Content DBs: Workaround - create the db outside of CA/PS, getting the SP collation right: "Latin1_General_CI_AS_KS_WS".
  18. If your SQL db uses spinning disk split the mdf and ldf files onto separate disks.  Order the db files as follows: tempdb must be on the fastest disk, content db log files next and content db's next.
  19. Change the default backup location to a separate disk (pretty obvious, but the default setting leaves backups on the same disk).
  20. Set the defaults for the database instance file locations - set the default location where new mdf, ldf and backup files will go on disk (per your fastest-disk calcs).
  21. Set the default for the database instance backup compression - I'd go with compression for all backups.
  22. mdf and ldf files should be on separate drives for 2 reasons: IOPS speed (provided this is spinning disk) & DR (you don't want to lose both).
  23. OS: NTFS allocation unit - by default on Windows 2008 this is 4096 bytes (4KB); it is generally much faster to set a 64KB allocation unit size (check with cmd> chkdsk c:).
  24. Use RAID 10 where possible.
  25. Windows firewall - if using it you will need to open the incoming SQL port i.e. 1433
  26. Avoid huge transaction logs and size them appropriately.  Preferably don't use the simple recovery model.  ldf content is not removed every 60 seconds when it is written to the mdf files.  To restore, get the last full backup and the last differential to reach the latest backed-up version, or get the current ldf, restore the last full backup and play the current ldf through the db.  To slim the ldf down after a successful full backup, back up the transaction log with "truncate the transaction log" (it zeros all transactions before the checkpoint made by the transaction log backup) to get it back to a reasonable size.  Hint: BACKUP LOG databasename TO devicename
  27. Watch the size of the content databases - they take time to recover.  Max is up to 4TB; try to stick to around 200GB (the exception will be blob storage).  This keeps backup and restore quick; however, AlwaysOn also changes the scenario.
  28. Format the Drives with 64K NTFS Allocation Units.
  29. Antivirus software must exclude LDF/MDF/NDF files.
  30. Don't shrink database log files by switching the recovery model to Simple.
  31. Ensure you are within the latency recommendation for SP to SQL (< 20ms).
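A T-SQL sketch of items 3, 7, 12 and 14 (the model/tempdb logical file names are the SQL defaults; the sizes and ndf path are examples to tune for your farm):

-- Item 3: MAXDOP = 1 for a SharePoint instance
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;
-- Item 7: model db initial sizes and fixed autogrowth
ALTER DATABASE model MODIFY FILE (NAME = modeldev, SIZE = 100MB, FILEGROWTH = 50MB);
ALTER DATABASE model MODIFY FILE (NAME = modellog, SIZE = 20MB, FILEGROWTH = 25MB);
-- Items 12/14: fixed-MB growth on tempdb and an extra data file per 2 cores
ALTER DATABASE tempdb MODIFY FILE (NAME = tempdev, SIZE = 1GB, FILEGROWTH = 100MB);
ALTER DATABASE tempdb ADD FILE (NAME = tempdev2, FILENAME = 'T:\TempDB\tempdb2.ndf', SIZE = 1GB, FILEGROWTH = 100MB);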
Image 1. Change the model database initial size and autogrowth settings.
Tip: Autogrowth of SharePoint 2013 content databases - changing the initial size of the model db will affect the content dbs (nice); the issue is that the autogrowth settings in the model db are not pushed to the content databases created through SharePoint (either CA or PowerShell).
Image 2. Change the TempDB to have multiple mdf files.


Image 3. Setting Memory on SQL Server instances.
SharePoint Sizing Starting point notes:
SP_Config database - "Transaction log files. We recommend that you back up the transaction log for the configuration database regularly to force truncation." (TechNet).   The Full recovery model is the default; switch this to Simple.  If you need Full, your ldfs will be busy - suggest sizing the ldf for at least 1000MB of growth per day; it can be a lot more.

Suggested Search database sizing.  If the Search databases are in the Full Recovery mode you also need to set the ldf sizing.
Database | mdf | mdf growth | ldf | ldf growth
SP_Search_Admin | 100 MB | 10 MB | 100 MB | 50 MB
SP_Search_CrawlStore | 100 MB | 50 MB | 300 MB | 100 MB
SP_Search_AnalyticsReportingStore | 100 MB | 50 MB | 25 MB | 25 MB
SP_Search_LinksStore | 100 MB | 50 MB | 25 MB | 25 MB

Update 23 Jan 2014:  Todd Klindt has a good set of blog posts on SQL 2012 for SharePoint 2013.

How do ldf files work with the mdf in SQL Server?
Content goes into the .ldf file first; a checkpoint occurs roughly every minute and moves the changes from the .ldf to the mdf.  If the Full recovery model is used, the content in the ldf file is retained - hence large transaction logs, but better recovery.  If the simple recovery model is used, the ldf data is dumped.
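A quick way to see each db's recovery model and why a log file is not clearing - the log_reuse_wait_desc column names what the log is waiting on, e.g. LOG_BACKUP:

SELECT name, recovery_model_desc, log_reuse_wait_desc FROM sys.databases;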


Keith Tuomi provided the code to automatically change the autogrowth sizing:
$Server = "SQLSRV2012"
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
$SMOserver = New-Object ('Microsoft.SqlServer.Management.Smo.Server') -ArgumentList $Server
$databases = $SMOserver.Databases
foreach ($DB in $databases | Where-Object {$_.Name -like '*Content*'}) {
    # Set log file growth
    foreach ($DBLF in $DB.LogFiles) {
        $DBLF.set_GrowthType("KB")
        $DBLF.set_Growth("51200")   # 50MB
        $DBLF.Alter()
    }
    # Set data file growth
    $DBFG = $DB.FileGroups
    foreach ($DBF in $DBFG.Files) {
        $DBF.set_GrowthType("KB")
        $DBF.set_Growth("102400")   # 100MB
        $DBF.Alter()
    }
}

SQL Licensing:
There are numerous licensing models available for SQL Server across the different versions, and I find them extremely complex. For large SQL deployments using the Enterprise Edition of SQL 2012, per-core licensing at the hardware (hypervisor) level is an option; the SQL instances can be tied to specific hardware. Affinity rules do need to be set up to prevent vMotion moving the VM to another hardware host.  In an HA situation using AOAG, the passive secondary SQL servers et al. will also require licensing.

SQL Installation:  I slipstream and automatically install SQL Server 2012.  The checklist below lists the SQL features you can install; determining your needs makes creating the config.ini file used by the install much easier.  The example below is used to create both my primary SQL Server and the secondary (AOAG) server - they are identical.  The choices are pretty standard; you may want to move the Reporting Services features to another server, or remove them if you are not using them.  A sample config.ini fragment follows the feature table.

 Feature  Install
Database Engine Services Y
SQL Server Replication Y
Full-Text Search N
Data Quality Service N
Analysis Services N
Reporting Services: Native N
Reporting Services: SharePoint Y
Reporting Services Add in for SharePoint Y
Data Quality Client N
SQL Server Data Tools N
Integration Services Y
Client Tools Connectivity Y
Client Tools Backward Compatibility Y
Client Tools SDK N
Documentation Components N
Management Tools Basic Y
Management Tools Complete Y
SQL Client Connectivity SDK N
Master Data Services N
Distributed Replay Controller N
Distributed Replay Client N
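As a sketch, the start of a ConfigurationFile.ini matching the table above (the feature keys are the SQL 2012 setup names for the rows marked Y; accounts, paths and the product key are omitted):

[OPTIONS]
ACTION="Install"
QUIET="True"
IACCEPTSQLSERVERLICENSETERMS="True"
FEATURES=SQLENGINE,REPLICATION,RS_SHP,RS_SHPWFE,IS,CONN,BC,SSMS,ADV_SSMS
INSTANCENAME="MSSQLSERVER"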


More Info:
http://yalla.itgroove.net/2013/03/sql-server-powershell-sharepoint-set-autogrowth-on-content-dbs/
http://blog.cloudshare.com/2013/08/28/how-to-use-the-same-autogrowth-value-for-sharepoint-content-databases/
http://technet.microsoft.com/en-us/library/hh292622.aspx
http://channel9.msdn.com/Series/Tuning-SQL-Server-2012-for-SharePoint-2013/Tuning-SQL-Server-2012-for-SharePoint-2013-03-Server-Settings-for-SQL-Server (Excellent)
http://www.sqlskills.com/blogs/paul/a-sql-server-dba-myth-a-day-1230-tempdb-should-always-have-one-data-file-per-processor-core/
http://www.brentozar.com/blitz/blitz-result-percent-growth-use/
http://www.brentozar.com/archive/2008/03/sql-server-2005-setup-checklist-part-1-before-the-install/
http://www.sqlskills.com/blogs/kimberly/8-steps-to-better-transaction-log-throughput/
http://www.toddklindt.com/default.aspx
SharePoint database types and description

Thanks to:
@allanSQLIS - Allan Mitchell - great sitting next to a SQL expert.

Steve Goodyear has a blog post containing a farm install and build guide.  I haven't used it, but it is a good post to check you are ready for your install and have done the big steps.

SQL Hardening:

From http://blogs.technet.com/b/rycampbe/archive/2013/10/14/securing-sharepoint-harden-sql-server-in-sharepoint-environments.aspx
Hardening SQL Server is done in a 3 phased approach:
  1. Encryption at Rest (Encrypt the data sitting on the hard drives)
  2. Encrypt Connections (Encrypt the data in flight on the network between servers)
  3. Server Isolation (Configure SQL Server's firewall to ignore requests from unauthorized servers)
Transparent Data Encryption (TDE) can be used to encrypt any SharePoint database.  This encrypts the mdf and ldf files, ensuring that even if the hard disk storage is compromised, the mdf and ldf cannot be used to restore the databases using the SQL restore tools.  There are a lot of ramifications to using TDE, so review the decision carefully before implementing.
 

Tuesday 8 January 2013

Create SQL Server Aliases using Powershell


Create SQL Aliases - example PowerShell
For DR and moving/splitting up SQL Server load, use aliases; they cost you nothing and later on you can split the load.  I use 3-4 even on small SP farms.

Tip: SQL 2012 has AlwaysOn availability clustering; the SQL Server listener (needed for Availability Groups (AG)) performs the same function as a SQL Alias.  So my take is: if you use a SQL 2012 AG, then the listener does the same job as the SQL Alias.  Obviously rather use the listener's DNS name as opposed to the IP address of the listener, but if you are using AG you don't need a SQL Alias.

Thoughts: SQL 2012 brings a new option to the table regarding SQL Aliases for SP2010 & SP2013.  If you are using AlwaysOn Availability Groups (AG) in SQL 2012, you get a SQL listener that performs the same function as the SQL Alias.  AG gives you automatic db failover for your SP farm.  The issue is, if you use AG with a SQL Alias you have a single point of failure, so your DB won't automatically fail over.

So the big reason for me to use SQL Aliases in the past was to allow me to split my database servers when one became the bottleneck.  The goodness with AG outweighs this option to improve performance, especially as, if I'm using AG, I probably have sufficient resources since this is planned upfront.

Creating registry keys safely in PowerShell (extended here to create the alias itself - the alias name, server and port are example values):
    # Check if the key already exists - Example from AutoSPInstaller on creating aliases.
    $client = Get-Item 'HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client' -ErrorAction SilentlyContinue
    # Create the key in case it doesn't yet exist
    If (!$client) {$client = New-Item 'HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client' -Force}
    # The alias itself lives under ConnectTo: value name = alias, data = "DBMSSOCN,server,port"
    New-Item 'HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo' -Force | Out-Null
    New-ItemProperty 'HKLM:\SOFTWARE\Microsoft\MSSQLServer\Client\ConnectTo' -Name 'SPSQLAlias' -Value 'DBMSSOCN,sql01.contoso.com,1433' -PropertyType String -Force



Tip:  Check SQL connections and SQL Aliases using a udl file.  Create a text file on your desktop and rename the .txt extension to .udl.  Open the UDL file and verify the connection string works.  I check the alias that uses the AOAG listener; if this fails I check the connection using the listener; if this fails I check I can hook to any SQL instance.  This pretty much tells me where I have gone wrong.

Tip:  Review your SQL Aliases and client-side networking using the SQL Server Client Network Utility tool.  In the Run window type: cliconfg

Friday 26 October 2012

SQL 2012 Slipstreamed Installation

Overview:  I like automation.  I use AutoSPInstaller for SharePoint even on developer VMs, and taking this further I want to automate SQL installations.  This post explains how to install SQL Server 2012 on a domain controller.  Sure, it's not a good idea, but I want a standalone development machine for SharePoint 2013 Preview.

Once the PowerShell completes and SQL Server 2012 is installed, verify SQL is accessible.
Thanks to Wayne Senior for the PowerShell scripts.

Wednesday 22 June 2011

SQL Named Pipes Error

Problem: When setting up a development machine, I install SQL before SharePoint.  When I try to access SQL Server 2008 R2 using "SQL Server Management Studio", I get the error "provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server".

Initial Hypothesis: Named Pipes has not been enabled by default since SQL Server 2000; it needs to be enabled using "SQL Server Configuration Manager".

Resolution:
All Programs >> Microsoft SQL Server 2008 R2 >> Configuration Tools >> SQL Server Configuration Manager >> Enable both “TCP/IP” and “Named Pipes”.


More Info:
http://blogs.msdn.com/b/sql_protocols/archive/2007/03/31/named-pipes-provider-error-40-could-not-open-a-connection-to-sql-server.aspx

Thursday 20 January 2011

SharePoint for small companies - A Small server farm solution

Overview:  Today I met a small company looking to put in SharePoint 2010.  I was surprised at the requirement for only 35 users, but after some review I realised such a small user base, provided they are heavy-usage information workers, could get great benefits out of SharePoint.  BPOS is too limited, and I think SharePoint 365 will be a better option, but as it is still in restricted public beta it's not an immediate option.
Problem: The client needs a solution for an intranet, collaboration and the replacement of network drives; phase 2 would be a couple of applications such as a new simple CRM and MySites.
Initial Hypothesis:  I refuse to put SQL on the same server as SharePoint.  Using Windows 2008 R2, I want to run SP2010 on a virtual machine and put SQL Server 2008 R2 on the physical machine.  This allows for a fairly easy upgrade and the ability to back up and restore the farm.  Only 1 Windows Standard licence is required for the VM and the physical SQL machine.
SQL Server can be licensed in 1 of 3 ways, namely: 1) per processor (most common), 2) per server plus user CALs or 3) per server plus device CALs.  If you use option 2, you can use multiple processors and get performance increases; however, you need to buy additional CALs at +-£85 each as new users join.  SQL Server Standard Edition is preferable to the SQL Server 2008 Express edition.
SharePoint 2010 is needed for internal users only; the question is between using the paid-for Standard edition or the free Foundation server.
Resolution: SharePoint Standard edition has features that will help the client; however, they are leaning towards SharePoint 365 ASAP.  I recommend 1 single server containing 8GB of RAM, x64, that supports hardware virtualisation.  Windows 2008 R2 Standard edition will be installed on the physical server.  A single Hyper-V virtual machine instance will be spun up for the SP2010 software.  SP2010 Foundation will be used in the discovery phase with 4GB of dedicated RAM.  The SQL Server database could be the Express edition, but its threshold of 10GB per database is really too small, so the Web or Workgroup SQL Server editions are better, though they have additional licence costs.  Standard edition is more common; however, you do need Enterprise edition for features such as PowerPivot, remote blob storage and backup compression.

Summary: You get no resilience on the small farms discussed, but there are options, and each can easily be built upon at a later stage.  This approach also lends itself to disaster recovery using a VM image, an Acronis base image and SharePoint farm backup.

More Info: Updated 23 March 2011
SP2010 edition comparison for search
SP2010 edition comparison for composite/applications dev
Licencing calc tool

Wednesday 15 December 2010

SharePoint 2010 boundaries and thresholds

I attended a suguk.org event in London about a week ago.  John Timney did the 1st presentation session and asked a couple of questions on SharePoint limits.  I didn't know the answers and tried to think back to MOSS and what I'd seen previously.  The simplest question, which I should know the answer to:

Qu: What is the maximum content database size supported by SharePoint 2010?
Ans: Microsoft supports content databases up to 200GB in size; in MOSS it was 100GB.  It is fairly common to see content databases considerably bigger than 100GB in MOSS that work.  The issue is how long it takes to perform operations on these content dbs, such as backups or moving content dbs.  If you have a dedicated SAN there is no reason not to go to much larger content databases; however, they are not supported by MS.

More info on SharePoint's boundaries and thresholds from MS

Qu: What I/O speed does MS recommend for your SharePoint 2010 SQL database?
Ans:  It is measured in I/O operations per second (IOPS).  The faster SQL can handle requests, the faster the return times and the shorter the request queue - so pretty important, and a fairly common bottleneck.  This is often a reason why people choose not to virtualise SQL Server; it is I/O intensive in SharePoint and really important to be fast.  Tip: Ensure VMs are thick provisioned for SQL Server.

To determine your IOPS, use the SQLIO Disk Subsystem Benchmark Tool (http://go.microsoft.com/fwlink/?LinkID=105586).

I guess the answer is as fast as possible, but you can determine your IOPS requirement using the tool and your usage.  I put the TempDB files on the fastest disk, followed by the ldf files for the content dbs on spinning disks.

Update 09/06/2011
Qu: Should I use separate disks for the mdf (data files) & ldf (transaction logs)?
Ans:  On small SQL Server farms, ensure that the transaction logs are stored on a different physical drive to the content databases, as this will reduce contention and increase performance significantly.  Larger SQL installations such as SANs have multiple disks, so there is less need to separate the files, as this is already done by the number of disk readers.  You can also check the performance of a drive by watching the "disk seconds per read/write" counters, which should be less than 20ms.  If the disk seconds per read/write is approaching 20ms, consider improving the disk speed or increasing the number of read points.
Update 22/08/2012 - Bigger architectures may use SSD/flash memory as opposed to spinning disks.  The IOPS are hugely improved as there is no disk seek time.  http://technet.microsoft.com/en-us/library/cc298801.aspx#Section1_5a

Qu: What is the default SQL Server database growth setting sizes?
Ans: SQL Server 2008 will grow data files in 1MB increments and transaction logs by 10%.  I would start with an initial content database size of 100MB (adjust according to your anticipated demand) and autogrowth of 50MB (adjust according to your system).  This general principle makes growth of the dbs infrequent, so the associated performance hit is reduced; unused space is optimised, as percentage growth of the transaction log has a huge incremental hit that is generally never needed after initial growth; and less fragmented databases result in faster performance.  The equivalent T-SQL is sketched below.
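As a sketch, the equivalent T-SQL for an existing content database (the database and logical file names are placeholders - check sys.database_files for yours):

ALTER DATABASE [WSS_Content] MODIFY FILE (NAME = 'WSS_Content', SIZE = 100MB, FILEGROWTH = 50MB);
ALTER DATABASE [WSS_Content] MODIFY FILE (NAME = 'WSS_Content_log', FILEGROWTH = 50MB);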

More Info:
Summary of limits and thresholds
http://blah.winsmarts.com/2010-5-How_big_can_my_SharePoint_2010_installation_be.aspx
SQL Checklist for SharePoint 2013

Tuesday 15 June 2010

Finding Your Default File Locations in SMO

Alan White's article helped me figure out where my default databases would be created.
To open SQL PowerShell, type "sqlps" at the Run prompt.

==============================================

Update: 2012/12/20
Problem: After creating a udl file to check my SQL connection to the database, which failed, I realised the SQL 2012 install done by the DBA was not on the default port 1433, and the port was set to dynamic.

Resolution: Change the TCP/IP network configuration as shown below

 

SQL Server for SharePoint 2010 notes

  • SharePoint Server 2010 needs 64-bit SQL Server 2008 SP 1 CU2 (Cumulative Update) or 64-bit SQL Server 2005 SP3 CU3.
  • Determine your storage requirements.
  • SQL is I/O intensive; to improve this, get fast disks and use multiple disks & disk controllers.
  • Use a SAN if possible - physical hardware with multiple disks in RAID 10 (striped & mirrored), preferably with the C:\ drive for programs, D:\ for data and E:\ for logs.
  • Don't virtualise SQL Server unless you are a virtualisation expert and can get extremely high IOPS.
  • Search db's can also be broken into their own drive/disks.
  • Build the database with hardware redundancy (NIC, controllers, RAID).
  • Use SQL Server 2008 R2 Enterprise edition if possible.
  • Indexes will be about 25% of the size of your database storage.
  • 8 GB is the minimum amount of RAM for a DB server in production; 16 GB is more comfortable, and for large farms or big site collections 64GB is a good guideline. SQL Enterprise Edition (EE) can support up to 2 terabytes of RAM.
  • Don't install any other software on the database server, as you want maximum I/O, and the DB server/s should be locked down.
  • SQL guideline is 4 SP2010 servers for each SQL machine.
  • Use multiple data files for content databases (distribute data across disks for faster backup and restore). Try to keep data files roughly similar in size & usage.
  • Large files can be stored in Remote Blob Storage (RBS); this saves db space and can be cheaper on disk space. Cheaper storage, such as cloud storage, can be used to hold large blob data. Blobs can be stored locally using FILESTREAM on all versions of SQL; for remote storage EE is needed (not sure what this means).
  • Pre-grow data & log files; this is faster than growing on the fly when the system is over-utilised.
  • Try to keep 25% of db space free for growth at peak times.
  • Monitor SQL Servers including hardware counters.
  • SQL Database Mirroring is greatly improved.
  • Use backup compression on SQL 2008 (2005 is not supported) if backup size is an issue. I/O is improved for the backup process.
  • SQL 2008 offers improved clustering.
  • Mirroring or Clustering is a good resilience option.  Review for HA.
  • Use Windows authentication not mixed mode authentication.
  • Use throttling if SQL is under load.
  • Transparent Data Encryption is supported in SQL 2008 EE & SP2010, there are costs but security is much better.
  • Failover clustering is still available if you use Standard or EE of SQL 2008.
  • Clustering and mirroring are good options for High Availability (HA); select appropriately for your network and knowledge.
  • Install SQL Server using a domain account.  The Windows service account needs no permissions, but it is needed for advanced SQL features, as opposed to using built-in or local accounts.
  • I tend to use IP addresses to point to SQL Server; the NetBIOS name also works and can be easier in the event of a SQL Server disaster.  For really good availability use a SQL Alias - it takes more setup time, but if you lose your SQL box you will be glad you did it, as you can switch over to another SQL box quickly.
  • Mirroring is a good option for HA.  Backups can be performed in various ways; ensure you select the appropriate backup strategy.
  • Max Degree of Parallelism (MAXDOP) should be set to 1. This can be found in the SQL Server instance properties > Advanced > Parallelism > Max Degree of Parallelism, or run the T-SQL: SELECT value FROM sys.configurations WHERE name = 'max degree of parallelism'. SP2013 tries to reset MAXDOP during installation.
  • AUTO_UPDATE_STATISTICS & AUTO_CREATE_STATISTICS should be disabled in SP2010.  More Info.
  • Use the default SQL Collation (Latin1_General_CI_AS_KS_WS), a good reason why SharePoint farms should have their own SQL Server instance.
  • Full backups should clear down the transaction log; if the transaction log is not cleaned up, do it manually after you have checked that the SQL backup of the db is valid.
  • Incremental (differential) backups are cumulative, i.e. they go back to the last full backup, not the last incremental backup.
  • Don't let transaction logs grow continuously; perform full backups periodically, followed by a transaction log backup that truncates the log to remove/zero unused transactions.
  • SQL 2008 Developer Edition is the equivalent of SQL 2008 Enterprise Edition.
  • SQL Server 2008 R2 is the best option if you can choose.
  • To determine SQL edition in SQL Management Studio run
SELECT SERVERPROPERTY('productversion'), SERVERPROPERTY ('productlevel'), SERVERPROPERTY ('edition')
  • SQL Server 2008 = 10.0.1600.22 needs cumulative update 2 for SP1. Update - 12/10/2010 or SP2
  • SQL Server 2008 SP1  = 10.0.2531.0 needs cumulative update 2 for SP1.
  • SQL Server 2008 SP1 + CU2 = 10.0.2714.0
  • SQL Server 2008 + SP2 = 10.0.4000.0 Update - 12/10/2010
  • SQL Server 2008 R2  =  10.50.1600.1 Update - 12/10/2010
  • SQL Server 2008 R2 SP1 = 10.50.2500.0 Updated - 26/07/2011
I also telnet from each of my SharePoint servers to SQL Server before I install, to ensure networking is working and that SQL Server is available.

SharePoint DBs created:
SPF2010: 1) Configuration database, 2) CA content database, 3) Content DBs (1 content db stores 1 or more site collections' data; best practice is to limit content dbs to 200GB), 4) Usage and Health Data Collection database (farm health & usage info), 5) Business Data Connectivity database and the Application Registry database (for the MOSS BDC, kept for historic reasons) & 6) Subscription Settings database.
SPS2010 Std Ed: 1) Secure Store database (stores & maps credentials), 2) State database (state info used by Forms Server, InfoPath & Visio services), 3) Web Analytics Staging database, 4) Web Analytics Reporting database, 5) Search service application Administration database, 6) Search service application Crawl database, 7) Search service application Property database, 8) User Profile service application Profile database, 9) User Profile service application Synchronization database, 10) User Profile service application Social Tagging database, 11) Managed Metadata database, 12) Word Automation Services database
SPS2010 Ent Ed: 1) PerformancePoint service application database, 2) Project Server 2010 databases, 3) Published database, 4) Archive database, 5) Reporting database, 6) ...
More info:
http://technet.microsoft.com/en-us/library/cc990273.aspx
Determine the SQL Server version installed
SQL Version no's
SQL DB info for SP 2010 - db's created
Updated 15 Dec 2010 - Database Maintenance for SharePoint 2010 by Matt Ranlett, Brendon Schwartz
Updated 11 May 2011 - Nice simple article on the SP2010 database's by Bert Jan van der Steeg
Updated 24 May 2011 - SQL Server mirroring is either synchronous (hot standby for HA) or asynchronous (for DR).  Mirroring requires Enterprise edition; Standard edition support is limited.  Clustering is normally done in the same server room, whereas mirroring is done to a remote site; the distance is dictated by the speed of the connection.
Update 11 Aug 2011 - Set the appropriate recovery model for your SP2010 databases.
Updated 28 May 2012 - SQL Best Practices for SharePoint 2010
Update 13 August 2013 - Best Practices for SQL Server in a SharePoint 2013 Farm - in SP2013, still ensure MAXDOP is set to 1.  Note: during the SP install, SQL will make this change if it has permissions to do so.

Moving a site collection to a new database

Problem: The IT department created my development machine from a base image with a 20GB C:\ drive. The company insists I use SQL Server Express for development. I installed SQL Express onto the C:\ drive, so the databases are all stored on the same C drive along with the Windows footprint of 13GB. Very quickly I ran out of space.
Initial Hypothesis: When creating content databases through Central Administration (CA), SharePoint will use the SQL default file location for *.mdf and *.ldf files. Therefore I need to change the default location for data and log files, which is set through the UI.
I have a large D:\ drive so I should move the data and log files to the D:\ drive. I need to:
  • Change the default location of the files to the D:\ drive for data\log files;
  • Create a new content database to host the existing site collections; and
  • Move the existing data (site collections) to the newly created content database.
Resolution:
The base image is causing issues; this, coupled with me putting the default location on my C drive and the inability of this environment to resize virtual machine drives, meant I had to use the resolution below.

1.> Change the default location using T-SQL, as sketched below (I'm sure there is a better solution using PowerShell for Windows with SMO);
SQL Server Management Studio
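One way to change the defaults in T-SQL is the undocumented xp_instance_regwrite, which writes the instance's default data/log locations to the registry (the D:\ paths below are examples for my machine):

USE master;
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultData', REG_SZ, N'D:\SQLData';
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE', N'Software\Microsoft\MSSQLServer\MSSQLServer', N'DefaultLog', REG_SZ, N'D:\SQLLogs';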


2.> Open PowerShell (PS) for SharePoint



PS> Move-SPSite -Identity http://mysharepointsite.com.au/sites/user -destinationdatabase WSS-Content-NewUserDB


The move should work; I will solve the SQL permissions issue in my next post.

More Info:
SQL default location info

Moving site collections using PowerShell or move the entire Content DB