Wednesday 30 October 2013

Generating WSPs in CI for SP2013

There are some comprehensive blog posts on setting up build automation.  This post is a short summary of creating SharePoint WSP files from a Visual Studio 2012 solution within TFS 2012.

Prep Steps:
  1. TFS2012 containing source files.
  2. You need a build controller and a build agent; these can be on the TFS server or on a separate server - in my case they are separated out.
  3. On the server holding the build controller, you need the SP binaries.
  4. Install "Microsoft Office Developer Tools for Visual Studio 2012 - RTM"
  5. Create a new template or clone the default template and adjust it as shown below (you will need the VSIX extension "Team Foundation Server PowerShell Tools 2012" to clone an existing build template):
 
Add the packaging switch under the Advanced section of the Process tab.
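The switch is passed as an MSBuild argument on the build definition (Process tab > Advanced > MSBuild Arguments). A minimal sketch - IsPackaging is the standard SharePoint packaging property:

/p:IsPackaging=True

With this set, the build produces the .wsp packages alongside the normal build output.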

More Info:
http://msdn.microsoft.com/en-us/library/ff622991.aspx

Saturday 26 October 2013

The Developer Dashboard in SharePoint 2013

Overview:  The Developer Dashboard is a great tool added to SharePoint in the 2010 version.  SharePoint 2013 has additional features and a new look.  This post explores the new developer dashboard.
SharePoint 2013 Dev dashboard; note the new tabs that have been added.
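To turn the dashboard on in SP2013, a commonly used PowerShell snippet (run from the SharePoint Management Shell) is below; note that in 2013 the dashboard is effectively On or Off, and "On" behaves like the old on-demand mode (an icon opens the dashboard in its own window):

$svc = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$dds = $svc.DeveloperDashboardSettings
$dds.DisplayLevel = "On"   # use "Off" to disable
$dds.Update()
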
Wednesday 23 October 2013

Performance Testing SharePoint 2013

Performance can be broken down into many aspects; in this post I examine my approach to verifying SP performance.

Added 2021: Breakdown of the types of testing used for a SaaS solution built on Azure PaaS.

Added 2015: Marcel De Vries Performance Testing Session at Dev Intersections Conference Amsterdam 2015
Marcel De Vries breaks it down into 4 forms of Performance Testing:
  • Performance Testing
  • Load Testing
  • Stress Testing
  • Capacity Planning
Note: Baseline each test type so that results can be compared against the deployed solution or application.

Fiddler:  I create a publishing site and run Fiddler from as close to the server as possible, with a minor adjustment so I can easily see the TTL (alternatively you can see this in the page response anyway).  This tells me how quickly my page is being returned, and you can also use it to prove whether network connectivity is your issue.  For instance, a remote site in Asia may have poor connectivity regardless of the request.

PageSpeed Insights:  This is a Google service to check the speed of your public website and provides a nice summary of items you can optimise.

Developer Dashboard: A brilliant tool, and even better in SP2013.  You need to enable the developer dashboard.  You can also add custom monitored scopes that allow you to drill down into bottlenecks using the dev dashboard.

Visual Studio Web Tests:  Available in VS2010 and VS2012, these allow you to record browser actions, replay them and validate responses. 
  • Create a new Visual Studio project of type "Web And Load Test Project". The little red record icon opens the web browser and records the browser activity.  I am using VS 2012 (Update 3) Ultimate edition.
  • The images below show a simple recorded web test, and it being run against individual WFEs and a load balancer.


Visual Studio Load Tests:  Combine web tests or other tests and generate load using TFS.  You can simulate load and add multiple agents to test that your farm or code works under stress.  Integration, Coded UI and unit tests can be added to load tests, but this can get fairly tricky, so I find it easier to simulate load using combinations of web tests.

To create a load test, you need 1 or more web tests in the web and load test project.  Right-click the project and create a new load test.  Follow the wizard through as shown below.
Run the Test and watch the performance counters on the SharePoint web, app and DB servers.

You can export the results to Excel, which provides detailed information about the load test.
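Tip: recorded web tests and load tests can also be run outside the IDE, which is handy for CI; a minimal sketch using MSTest from a Visual Studio command prompt (the file names are examples):

MSTest.exe /testcontainer:MyWebTest.webtest
MSTest.exe /testcontainer:MyLoadTest.loadtest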

Read More:
http://msdn.microsoft.com/en-us/library/vstudio/jj710162.aspx
http://blog.sharepointsite.co.uk/2013/07/identifying-page-load-times-using.html (Fiddler)
http://blog.sharepointsite.co.uk/2010/06/sharepoint-2010-developer-dashboard.html (Dev Dashboard)
http://blog.sharepointsite.co.uk/2011/01/developer-dashboard-custom-scoped.html (Dev dashboard & monitoring)

Update 26 Aug 2014
AlertFox is a useful tool for verifying page performance.  It has multiple useful functions, such as monitoring that the website is up, and running macros to log in and perform searches.  Each task records the time it takes, and you can monitor performance from various locations such as the US or Europe.

Monday 21 October 2013

SQL Server 2012 for SharePoint 2013 checklist

  1. Use multiple SQL aliases (a separate one for Search).
  2. Dedicate SQL Server for SharePoint.
  3. Set max degree of parallelism (MAXDOP) to 1 for SQL instances used by SharePoint (SP sets this when it is installed).  MAXDOP is the number of processors used for each SQL statement.
  4. Mixed mode authentication - don't install SQL 2012 for SP in mixed mode auth unless you have a good reason (the only reason I have heard of is from Todd Klindt's podcast, which mentions that Access Services needs to use the SA account).  If you have other databases that need SQL permission access, consider moving them to a dedicated SQL instance.
  5. SQL Server 2012 AlwaysOn Availability Groups are a new high availability and disaster recovery solution that are an alternative to database mirroring and log shipping solutions. AlwaysOn Availability Groups support a set of primary read-write databases and up to four sets of secondary databases that can be set as read-only.
  6. Memory: You can set the max memory each SQL instance can use.  If the machine is dedicated to providing SQL for SharePoint, the max setting is total memory minus 4GB for the OS.  See image 3.
  7. Model DB: Increase the initial size and autogrowth settings - use fixed growth sizes.  I would start with 100MB for the mdf and 20MB for the ldf as initial sizes.  For autogrowth on content dbs I start with 50MB growth for the mdf and 25MB for the log file (ldf).  See image 1 below.
  8. Model DB: Don't modify DB Collation after install.
  9. Model DB: Use the Full recovery model on the Model system database (Simple prevents large log files, but Full allows point-in-time recovery).
  10. Avoid giant ldf log files (don't use DBCC SHRINKFILE to resize ldf files by switching to the Simple recovery model; it breaks the LSN/log backup chain).  ldf growth is far more resource intensive than mdf growth.  The problem I see with content db growth is that the IT pro lets the ldf get out of control, then backs up and shrinks the database; usage causes the ldf to autogrow periodically, and the farm goes back to needing the process repeated, with heavy growth issues.  The key is to ensure the ldf has a decent initial size (you can work this out between full backup cycles); the ldf for content dbs should rarely need to autogrow, and when it does, make the growth a fixed amount.
  11. TempDB: Having multiple tempdb data files speeds up SQL performance.  The tempdb is a system db whose resources are available to all users.
  12. TempDB: Increase its initial size and set autogrowth in MB as opposed to percent (see image 2).
  13. TempDB: Simple recovery model for TempDB is correct.
  14. TempDB: The default is 1 data file; you need more than this depending on how many CPU cores are on your database server.  One option is to set the number of tempdb files to the number of CPU cores (1-to-1); some folks recommend one less than the number of cores, and others go for 1 tempdb file per 2 CPU cores.  I start with 4 and tune in performance testing or once it is running - that said, I normally have 16 cores and can't see performance gains in my testing beyond 4 tempdb files.  As a rule I'd start with 1 tempdb file for each 2 cores and tune from there (see the scripted example below the checklist images).
  15. TempDB: Move the tempdb data files and the tempdb .ldf file to their own drive - as fast a drive as possible.
  16. Content DBs: Creating a new content db through CA or PowerShell won't take all the model db settings; it takes the initial sizes but not the autogrowth settings.
  17. Content DBs: Workaround - create the db directly in SQL first, making sure you get the SP collation right: "Latin1_General_CI_AS_KS_WS".
  18. If your SQL Server uses spinning disk, split the mdf and ldf files onto separate disks.  Prioritise the db files as follows: tempdb on the fastest disk, content db log files next, then the content dbs.
  19. Change the default backup location to a separate disk (pretty obvious, but the install drive is the default setting).
  20. Set the defaults for the database instance file locations - the default locations where new mdf, ldf and backup files will go on disk (per your fastest-disk calcs).
  21. Set the default for the database instance backup compression - I'd go with compression for all backups.
  22. mdf and ldf files should be on separate drives for 2 reasons: IOPS speed (provided this is spinning disk) and DR (you don't want to lose both).
  23. OS: NTFS allocation unit - by default on Windows 2008 this is 4096 bytes (4KB); it is generally much faster to set a 64KB allocation unit size.  You can check the current setting with e.g. cmd> chkdsk c:
  24. Use RAID 10 where possible.
  25. Windows firewall - if using it, you will need to open the incoming SQL port, i.e. 1433.
  26. Avoid huge transaction logs and size them appropriately; preferably don't use the Simple recovery model.  Ldf content is not removed every 60 seconds when it is written to the mdf files.  For recovery, restore the last full backup and the latest differential to get to the latest backed-up version, or restore the last full backup and replay the current ldf through the db.  To slim the ldf down after a successful full backup, back up the transaction log with "truncate the transaction log" (this zeros all transactions before the checkpoint made by the transaction log backup) rather than deleting the log file, to get it back to a reasonable size.  Hint: BACKUP LOG databasename TO devicename (see the example under the ldf/mdf explanation below).
  27. Watch the size of the content databases; they take time to recover.  The max is up to 4TB, but try to stick to around 200GB (the exception will be blob storage).  This keeps backup and restore quick; AlwaysOn, however, also changes the scenario.
  28. Format the Drives with 64K NTFS Allocation Units.
  29. Antivirus software must exclude LDF/MDF/NDF files.
  30. Don't shrink database log files by switching the recovery model to Simple.
  31. Ensure you are within the latency recommendation for SP to SQL (< 20ms).
Image 1. Change the model database initial size and autogrowth settings.
Tip: Autogrowth of SharePoint 2013 content databases - changing the initial size of the model db will affect the content dbs (nice); the issue is that the autogrowth settings in the model db are not pushed to the content databases created through SharePoint (either CA or PowerShell).
Image 2. Change the TempDB to have multiple mdf files.


Image 3. Setting Memory on SQL Server instances.
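Following item 14 above, a minimal SMO sketch for adding an extra tempdb data file; the server name, drive letter and sizes are assumptions to adjust for your farm:

[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
$server = New-Object Microsoft.SqlServer.Management.Smo.Server "SQLSRV2012"
$tempdb = $server.Databases["tempdb"]
# Add a second data file; SMO sizes are in KB
$file = New-Object Microsoft.SqlServer.Management.Smo.DataFile($tempdb.FileGroups["PRIMARY"], "tempdev2")
$file.FileName = "T:\TempDB\tempdev2.ndf"
$file.Size = 1048576         # 1GB initial size
$file.GrowthType = "KB"
$file.Growth = 102400        # 100MB fixed growth
$file.Create()
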
SharePoint Sizing Starting point notes:
SP_Config database - "Transaction log files. We recommend that you back up the transaction log for the configuration database regularly to force truncation." (TechNet).  The Full recovery model is the default; switch this to Simple.  If you stay on Full, your ldfs will be busy - suggest an ldf sized for at least 1000MB of growth per day; it can be a lot more.

Suggested Search database sizing.  If the Search databases are in the Full recovery model you also need to set the ldf sizing.
Database                             mdf      mdf growth   ldf      ldf growth
SP_Search_Admin                      100 MB   10 MB        100 MB   50 MB
SP_Search_CrawlStore                 100 MB   50 MB        300 MB   100 MB
SP_Search_AnalyticsReportingStore    100 MB   50 MB        25 MB    25 MB
SP_Search_LinksStore                 100 MB   50 MB        25 MB    25 MB

Update 23 Jan 2014:  Todd Klindt has a good set of blog posts on SQL 2012 for SharePoint 2013.

How do ldf files work with mdf in SQL Server:
Content goes into the .ldf file first; a checkpoint occurs roughly every minute and moves the data from the .ldf to the mdf.  If the Full recovery model is used, the content in the ldf file is retained - hence large transaction logs, but recovery is better.  If the Simple recovery model is used, the ldf data is discarded.
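Following on from item 26 in the checklist, a minimal sketch of backing up (and thereby truncating) a content db transaction log; the instance name, database name and backup path are assumptions:

Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance "SQLSRV2012" -Query "BACKUP LOG [WSS_Content] TO DISK = N'E:\Backups\WSS_Content_log.trn';"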


Keith Tuomi provides the code to automatically change the autogrowth sizing:
$Server = "SQLSRV2012"
[System.Reflection.Assembly]::LoadWithPartialName("Microsoft.SqlServer.SMO") | Out-Null
$SMOserver = New-Object ('Microsoft.SqlServer.Management.Smo.Server') -ArgumentList $Server
$databases = $SMOserver.Databases
foreach ($DB in $databases | Where-Object { $_.Name -like '*Content*' }) {
    # Set log file growth
    foreach ($DBLF in $DB.LogFiles) {
        $DBLF.set_GrowthType("KB")
        $DBLF.set_Growth("51200")    # 50MB
        $DBLF.Alter()
    }
    # Set data file growth
    $DBFG = $DB.FileGroups
    foreach ($DBF in $DBFG.Files) {
        $DBF.set_GrowthType("KB")
        $DBF.set_Growth("102400")    # 100MB
        $DBF.Alter()
    }
}

SQL Licencing:
There are numerous licensing models available for SQL Server across the different versions, and I find them extremely complex. For large deployments using the Enterprise edition of SQL 2012, per-core licensing at the hardware (hypervisor) level is an option; the SQL instances can then be tied to specific hardware. Affinity rules do need to be set up to prevent vMotion moving the VM to another hardware host.  In an HA setup using AOAG, the passive secondary SQL servers will also require licensing.

SQL Installation:  I slipstream and automatically install SQL Server 2012.  The checklist below lists the SQL features you can install; determining your needs up front makes creating the config.ini file used by the install much easier.  The example below is used to create both my primary SQL Server and the secondary (AOAG) server - they are identical.  The choices are pretty standard; you may want to move the Reporting Services features to another server, or remove them if you are not using them.

Feature                                    Install
Database Engine Services                   Y
SQL Server Replication                     Y
Full-Text Search                           N
Data Quality Services                      N
Analysis Services                          N
Reporting Services: Native                 N
Reporting Services: SharePoint             Y
Reporting Services Add-in for SharePoint   Y
Data Quality Client                        N
SQL Server Data Tools                      N
Integration Services                       Y
Client Tools Connectivity                  Y
Client Tools Backwards Compatibility       Y
Client Tools SDK                           N
Documentation Components                   N
Management Tools - Basic                   Y
Management Tools - Complete                Y
SQL Client Connectivity SDK                N
Master Data Services                       N
Distributed Replay Controller              N
Distributed Replay Client                  N
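
As a sketch, the feature choices above map to the FEATURES line of the unattended setup answer file (usually ConfigurationFile.ini); the instance and admin account names below are assumptions:

; SQL Server 2012 unattended install (partial sketch)
[OPTIONS]
ACTION="Install"
QUIET="True"
IACCEPTSQLSERVERLICENSETERMS="True"
INSTANCENAME="MSSQLSERVER"
FEATURES=SQLENGINE,REPLICATION,RS_SHP,RS_SHPWFE,IS,CONN,BC,SSMS,ADV_SSMS
SQLSYSADMINACCOUNTS="DEMO\SQLAdmins"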


More Info:
http://yalla.itgroove.net/2013/03/sql-server-powershell-sharepoint-set-autogrowth-on-content-dbs/
http://blog.cloudshare.com/2013/08/28/how-to-use-the-same-autogrowth-value-for-sharepoint-content-databases/
http://technet.microsoft.com/en-us/library/hh292622.aspx
http://channel9.msdn.com/Series/Tuning-SQL-Server-2012-for-SharePoint-2013/Tuning-SQL-Server-2012-for-SharePoint-2013-03-Server-Settings-for-SQL-Server (Excellent)
http://www.sqlskills.com/blogs/paul/a-sql-server-dba-myth-a-day-1230-tempdb-should-always-have-one-data-file-per-processor-core/
http://www.brentozar.com/blitz/blitz-result-percent-growth-use/
http://www.brentozar.com/archive/2008/03/sql-server-2005-setup-checklist-part-1-before-the-install/
http://www.sqlskills.com/blogs/kimberly/8-steps-to-better-transaction-log-throughput/
http://www.toddklindt.com/default.aspx
SharePoint database types and description

Thanks to:
@allanSQLIS - Allan Mitchell - great sitting next to a SQL expert.

Steve Goodyear has a blog post with a farm install and build guide.  I haven't used it, but it is a good post for checking you are ready for your install and have done the big steps.

SQL Hardening:

From http://blogs.technet.com/b/rycampbe/archive/2013/10/14/securing-sharepoint-harden-sql-server-in-sharepoint-environments.aspx
Hardening SQL Server is done in a 3-phase approach:
  1. Encryption at Rest (Encrypt the data sitting on the hard drives)
  2. Encrypt Connections (Encrypt the data in flight on the network between servers)
  3. Server Isolation (Configure SQL Server's firewall to ignore requests from unauthorized servers)
Transparent Data Encryption (TDE) can be used to encrypt any SharePoint database.  This will encrypt the mdf and ldf files, ensuring that even if the hard disk storage is compromised, the mdf and ldf files cannot be used to restore the databases using the SQL restore tools.  There are a lot of ramifications to using TDE, so review the decision carefully before implementing.
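A minimal sketch of enabling TDE on a single content database; the passphrase, certificate, database and instance names are all examples, and you must back up the certificate and its private key or the data is unrecoverable:

Import-Module SQLPS -DisableNameChecking
$tde = @"
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'UseAStrongPassphraseHere!1';
CREATE CERTIFICATE TdeCert WITH SUBJECT = 'TDE certificate for SharePoint dbs';
USE [WSS_Content];
CREATE DATABASE ENCRYPTION KEY WITH ALGORITHM = AES_256 ENCRYPTION BY SERVER CERTIFICATE TdeCert;
ALTER DATABASE [WSS_Content] SET ENCRYPTION ON;
"@
Invoke-Sqlcmd -ServerInstance "SQLSRV2012" -Query $tde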
 

Monday 14 October 2013

Alternate Access Mapping from 443 to http

Overview: Alternate Access Mapping (AAM) is pretty simple to use.  In my scenario I have an https site that is already working.  A hardware load balancer is added to the environment to do the SSL offloading, so I now need SharePoint to accept the http/port 80 requests it sends.

2 Steps:
1.> Edit the binding in IIS

2.> Create AAM settings in Central Admin (CA)
CA > Configure alternate access mappings > Add the http URL to the existing https site.
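The same mapping can be scripted with PowerShell; a minimal sketch, assuming the public URL is https://portal.contoso.com and the load balancer forwards plain http:

New-SPAlternateURL -Url "http://portal.contoso.com" -WebApplication "https://portal.contoso.com" -Zone Default -Internal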

More Info:
http://blogs.msdn.com/b/fabdulwahab/archive/2013/01/21/configure-ssl-for-sharepoint-2013.aspx

ULS log files are created but are empty

Problem: I have created a SharePoint 2013 farm with a custom location for my log files, e.g. d:\logs.  On the 1st SP2013 VM that created the SP farm using AutoSPInstaller, the logs are present and logging correctly.  On the other/remaining SP VMs, the log files are created every 30 minutes; however, the log files are empty.

Initial Hypothesis: I had no idea, and after searching Google the answer is permissions on the local file system.  Justin Kobel's post gives the fix.

Resolution:
Add the relevant account to the local "Performance Log Users" group and restart the tracing service, or run this PowerShell on each VM:
$computer = "$env:computername"
$group = "Performance Log Users"
$domain = "demo"
$user = "pbeck"
# Add the user to the local Performance Log Users group
$de = [ADSI]"WinNT://$computer/$group,group"
$de.psbase.Invoke("Add", ([ADSI]"WinNT://$domain/$user").path)
# Restart the SharePoint tracing service so logging picks up the change
net stop SPTraceV4
net start SPTraceV4
 
Tip: Using Invoke-Command you can loop through and connect to the SP VMs remotely and run the change on each VM in the SP farm (a sketch follows below).  PowerShell to loop through and change local security groups is here and the xml is here.
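A minimal sketch of that remote approach (the server, domain and account names are assumptions, and PowerShell remoting must be enabled on the targets):

$servers = "SP-WFE1", "SP-WFE2", "SP-APP1"
Invoke-Command -ComputerName $servers -ScriptBlock {
  # Add the account to the local group, then restart the tracing service
  $de = [ADSI]"WinNT://$env:computername/Performance Log Users,group"
  $de.psbase.Invoke("Add", ([ADSI]"WinNT://demo/pbeck").path)
  net stop SPTraceV4
  net start SPTraceV4
}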

More Info:
You will notice that the files within Windows Explorer are shown in a blue colour.  This shows that the files/folders are compressed.
Blue means compressed.
Green means encrypted.

Friday 11 October 2013

Testing my SharePoint 2013 Network Load Balancer

Overview:  This is how I tested my Kemp load balancer.  Kemp terminates the SSL and checks that the http service is running before distributing traffic.  I still like to use session persistence for load balancing.

Fiddler is useful from the client; you can check that SSL is being correctly rewritten by the load balancer.
Microsoft Network Monitor 3.4 is useful for watching the traffic between the WFE and the load balancer; WireShark is also a good option.  This role would probably be best performed by using Fiddler as a reverse proxy to capture the traffic (I have never done this).


SharePoint 2013 has the Request Management service that acts like a load balancer for traffic.  I don't see the point, and I would need a rather strange scenario to use Request Management when I have a decent load balancer (KEMP or F5) in place.


Updated 17 Aug 2015: All the load balancer solutions (F5, Cisco, Kemp etc.) offer traffic distribution, and it is a good idea to use one of the more advanced algorithms.  For instance, on an F5, the "Dynamic Ratio" algorithm redirects traffic based on continuous monitoring of the servers' resources.  F5 has many options; I prefer "Dynamic Ratio", but it depends on the circumstances.