Sunday 26 January 2014

How to test SharePoint 2013 applications


Overview: There are several forms of testing, and this post aims to provide basic testing information to help you make sure your SharePoint applications are fit for purpose.  Functional testing, including the UI, is essential, but performance testing is another big win for large enterprise applications on SharePoint.

Testing using Selenium for SharePoint
UI Testing and Monitoring

Create a Test Plan - consists of:
  1. Test configurations: The combinations of processor type, operating system version, and browser version on which you want to test the application.
  2. Test environment: The number and characteristics of the computers on which to run the tests. Set this if you are testing a web-based or distributed application.
  3. Test settings: The types of data that you want to collect while running the tests.
http://msdn.microsoft.com/en-us/library/vstudio/dd286583.aspx
http://visualstudiogallery.msdn.microsoft.com/e79e4a0f-f670-47c2-9b8a-3b6f664bf4ae/
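For the UI side, a simple Selenium WebDriver smoke test can be driven from PowerShell.  Below is a minimal sketch, assuming the Selenium .NET bindings (WebDriver.dll) and chromedriver.exe are installed locally; the DLL path and the site URL are placeholders:

# Minimal Selenium smoke test of a SharePoint site from PowerShell
# (the DLL path and site URL are placeholders; chromedriver.exe must be on the PATH)
Add-Type -Path "C:\Selenium\WebDriver.dll"
$driver = New-Object OpenQA.Selenium.Chrome.ChromeDriver
$driver.Navigate().GoToUrl("https://intranet.demo.dev/sites/teamsite")
if ($driver.Title -notmatch "Home") { Write-Host "Unexpected page title:" $driver.Title -ForegroundColor Red }
else { Write-Host "Landing page loaded:" $driver.Title -ForegroundColor Green }
$driver.Quit()

A test like this can then be wired into the Test Plan's configurations and environments described above.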

Below are the results from a single server VM running a simple load test.  This is VS2012 on Windows 2008R2 against SP2013 with SQL 2012.





Burp Suite is a good, easy-to-use penetration testing tool, and as always Fiddler has numerous uses for testing.

HP's WebInspect is also a good penetration/security testing tool:
http://www8.hp.com/uk/en/software-solutions/webinspect-dynamic-analysis-dast/

Tip: SAST & DAST are application security testing methodologies used to find vulnerabilities.

Monday 13 January 2014

CU Upgrading On Prem Office Web Apps 2013

Problem: I need to upgrade my WCA farm from the RTM version to the March 2013 CU to allow PDFs to be displayed with Office Web Apps.


Steps to upgrade an Existing WCA farm:
  1. Copy the exe to each machine: OWA1 & OWA2 (D:\OfficeWebApps\March 2013 CU)
  2. Remove the secondary servers from the farm.  In my case this is WCA2: remote into SP-WCA2 and run PS> Remove-OfficeWebAppsMachine (on WCA2)
  3. Run the exe on the secondaries (WCA2), reboot, shut down and snapshot, and then repeat on the primary (WCA1)
  4. Check the primary: verify the url works: http://wcauat.demo.dev/hosting/discovery - this can be done on the local machine or using https from a client machine.
  5. Create a new OWA farm, this will run over the top of your existing farm   PS> New-OfficeWebAppsFarm ... (on WCA1) - see the sketch after this list
  6. Join the secondary to the farm    PS> New-OfficeWebAppsMachine -MachineToJoin "sp-wca1" (the primary)
  7. Activate WordPDF PS> New-SPWOPIBinding -ServerName "wcauat.demo.dev" -Application "WordPDF" -AllowHTTP
Perform step 7 on the SharePoint farm.
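As referenced in step 5, the farm is recreated with New-OfficeWebAppsFarm.  A minimal sketch is below; the certificate name is an example and the URLs follow this post's farm, so adjust for your http/SSL termination setup:

# Recreate the OWA/WCA farm on the primary (WCA1) - example parameter values
New-OfficeWebAppsFarm -InternalUrl "https://wcauat.demo.dev" -ExternalUrl "https://wcauat.demo.dev" `
 -CertificateName "WCA Wildcard" -EditingEnabled
# Then join the secondary from WCA2 (step 6)
New-OfficeWebAppsMachine -MachineToJoin "sp-wca1"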

More Info:





You can verify the version of your WCA farm using (not sure this reports the correct version): 
(Invoke-WebRequest https://wcauat.demo.dev/m/met/participant.svc/jsonAnonymous/BroadcastPing).Headers["X-OfficeVersion"]

An easier approach is to use Fiddler to monitor the http/https traffic to figure out the Office Web Apps farm version:
Open IE
Open Fiddler
In IE, go to the url https://wca.demo.dev/m/met/participant.svc, replacing "wca.demo.dev" with your Office Web Apps service url. 
In Fiddler, review the response headers; you will see the response header X-OfficeVersion with a version number.

 

Saturday 11 January 2014

SharePoint 2013 Search Limits with an example

Overview:  This post aims to provide guidelines for building SharePoint 2013 Search farms.  There are 6 search components (labelled C1-C6 below) and 4 database types (labelled DB1-DB4).  Index partitions are a big factor in search planning.

Example: Throughout this post I provide an example of a 60 million item search farm with redundancy/High Availability (HA).

Index partitions: Adding 1 index partition per 10 million items is the MS recommendation; the real number depends on IOPs and how the query load looks.  A replica of each partition is needed for HA, and replicas also improve query performance over a single copy.  
Example: So assuming a max of 10 million items per index partition, an HA farm for the 60 million item example requires 6 partitions, each with a replica.

Index component (C1): 2 index components for each partition.
Example: 12 index components.

Query component (C2): Use 2 query processing components for HA/redundancy; add an additional 2 query processing components for every 80 million items. 
Example: 2 Query components.

Crawl database (DB1): Use 1 crawl database per 20 million items.  This is probably the most commonly overlooked item in search farms.  The crawl database contains tracking and historical information about the crawled items, such as the last crawl id, crawl times and crawl history.  The crawl component feeds into the crawl database.  Medium usage should be under 100GB.  Add another crawl database before you reach 20 million items or a 100GB database size.  My initial size is mdf 100 MB (growth 50MB) and the ldf is 300 MB (growth 50MB).
Example: 3 crawl databases at 20 million items each allows for a search farm containing 60 million items.

Link database (DB2): Use 1 link database per 60 million items.  I believe 1 link database will handle up to 100 million items.  Mdf 100 MB (growth 50 MB) and Ldf 25 MB (growth 25 MB).
Example: 1 link database.

Analytics reporting database (DB3): Add 1 analytics reporting database for every 500,000 unique items viewed each day, or every 10-20 million total items.  This is the heavy search database.  Add a new database to keep each analytics reporting database under roughly 250GB.  Mdf 100 MB (growth 50 MB) and ldf 25 MB (growth 25 MB).
Example: Start with 1 and grow as needed.

Analytics Processing Component (C3):
Example: 2-4 Analytics Processing components.

Content Processing Component (C4): processes crawled items and moves the item data to the index component.  Its function is to parse documents, perform property mapping and entity extraction, perform language processing, and ultimately turn crawled items into indexed items.
Example: 4 Content Processing components.

Admin component (C5): Use 1 administration component, or 2 for redundancy/HA, for all farm sizes.
Example: 2 Admin components.

Admin database (DB4): Low usage; even in big farms you only need 1 database.  It should stay well under 100GB.  It holds crawled and managed properties, query rules, topology and history.  My initial size is mdf 100 MB (growth 10MB) and the ldf is 100 MB (growth 50MB).
Example: 1 Admin database. 

Crawl Component (C6):  The crawl component crawls content sources and delivers crawled items, including metadata, to the Content Processing component.  In SP2013 you don't specify the relationship between the crawl database and the crawl component; the crawl component will distribute to all available crawl databases.  The 3 types of crawls available in SP2013 are: Full, Incremental and Continuous (Continuous only works for SP2013 content).  Schema changes still require a full crawl to pick up the change in SP2013.  Crawl does not do as much analysis as was the case in SP2010, so it is a much lighter/faster process.
Example: 2 Crawl components allow for HA and improved performance.

Database Hardware: for the example use 8 CPUs and 16GB of RAM; disk size depends on content but it is smaller than SP2010.

Placing components on VMs for the example
Group your search roles onto servers (a topology sketch follows this list):
  • Index & Query Processing
  • Analytics & Content Processing
  • Crawl, Content processing & Search Admin 
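To put that grouping into practice, below is a minimal PowerShell sketch of defining a SP2013 search topology.  The server names (SRV-SEARCH1/SRV-SEARCH2), the index root directory and the single partition shown are placeholders; repeat the index component lines per partition and per replica server to reach the example's 6 partitions and 12 index components:

# Minimal sketch - assumes the Search Service Application already exists
$ssa = Get-SPEnterpriseSearchServiceApplication
$si1 = Get-SPEnterpriseSearchServiceInstance -Identity "SRV-SEARCH1"
$si2 = Get-SPEnterpriseSearchServiceInstance -Identity "SRV-SEARCH2"
Start-SPEnterpriseSearchServiceInstance -Identity $si1
Start-SPEnterpriseSearchServiceInstance -Identity $si2
$topology = New-SPEnterpriseSearchTopology -SearchApplication $ssa
# Crawl, content processing, analytics & admin grouped on one server (per the list above)
New-SPEnterpriseSearchAdminComponent -SearchTopology $topology -SearchServiceInstance $si1
New-SPEnterpriseSearchCrawlComponent -SearchTopology $topology -SearchServiceInstance $si1
New-SPEnterpriseSearchContentProcessingComponent -SearchTopology $topology -SearchServiceInstance $si1
New-SPEnterpriseSearchAnalyticsProcessingComponent -SearchTopology $topology -SearchServiceInstance $si1
# Index & query processing grouped on the other server
New-SPEnterpriseSearchQueryProcessingComponent -SearchTopology $topology -SearchServiceInstance $si2
# One index component per partition per replica server; the directory must exist and be empty
New-SPEnterpriseSearchIndexComponent -SearchTopology $topology -SearchServiceInstance $si2 -IndexPartition 0 -RootDirectory "D:\SearchIndex"
Set-SPEnterpriseSearchTopology -Identity $topology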

Note: I have included suggested mdf and ldf sizing and growth assuming the full recovery model, as I use AOAG (if you are using the default simple recovery model, only worry about the mdf sizes).  These figures are based on my farm's usage, so yours will vary, but they are a good starting point until you can monitor your own database growth patterns.  Change the ldf and mdf settings, as the default database settings are completely inappropriate.  Growth must be in fixed MB (never percentages) and you want as little ldf growth on the fly as possible.  A good guideline is a 100 MB mdf initial size with 50 MB growth; ldfs are 25-50% of the size of the mdf, and I would use a 50 MB minimum for the ldf initial size, then set the ldf growth to 50 MB on all 4 search databases.  Search ldfs are pretty hectic, so in this post you will notice much higher ldf settings than I mention in this note.  Also check out the SQL Checklist for SP2013 post.  Backup frequency affects log/ldf file usage, so check out this post to understand how your databases need to be set up (in short, more frequent backups allow smaller ldfs).

Database                          | mdf initial | mdf growth | ldf initial | ldf growth
SP_Search_Admin                   | 100 MB      | 10 MB      | 100 MB      | 50 MB
SP_Search_CrawlStore              | 100 MB      | 50 MB      | 300 MB      | 100 MB
SP_Search_AnalyticsReportingStore | 100 MB      | 50 MB      | 25 MB       | 25 MB
SP_Search_LinksStore              | 100 MB      | 50 MB      | 25 MB       | 25 MB

More Info:

Troubleshooting Crawl

Thursday 9 January 2014

SP 2013 SSRS failing after RBS enabled and disabled

Problem: I have SSRS (SharePoint mode) enabled on my SharePoint 2013 farm using SQL 2012, and it was working.  I enabled AvePoint's RBS provider on the farm and enabled RBS on 1 out of 2 web applications.  I then disabled RBS on the web application and assumed all was good, but SSRS stopped working and threw 1 of 2 errors on the web application where RBS had been enabled and then disabled:
"For more information about this error navigate to the report server on the local server machine, or enable remote errors" or ....

Note: rdl files added before RBS was enabled work; rdls added while RBS was enabled don't work, and rdls added after disabling RBS also fail.

Initial Hypothesis/Error tracking:
1.> SSRS and WCA errors unfortunately don't get correlationId's, so I turned off all the SSRS SSA instances except 1 so I know which server to find the error on in ULS log. 
2.> I ran the request for the report (rdl, this report just displays a label so I know it is good) again so the error is captured in the ULS log.
3.> I painfully open the latest log using ULS viewer and scan for errors and I find: 
System.Data.SqlClient.SqlException (0x80131904): The EXECUTE permission was denied on the object 'rbs_fn_get_blob_reference', database 'SP_Content_PaulXX', schema 'mssqlrbs'.
at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection, Action`1 wrapCloseInAction)

4.> Now I am going nuts as RBS has been disabled and I decide to trace the request in SQL Profiler, I can't find the call in SQL Server profiler and while looking for it I realise the function is not in the database.  I also find a post suggesting changing permissions but as I don't have the function, permissions isn't my issue. 
5.> I start looking at RBS on the farm using PowerShell PS>
# List each content database and whether RBS is enabled on it
$cdbs = Get-SPContentDatabase
foreach ($cdb in $cdbs)
{
 $rbs = $cdb.RemoteBlobStorageSettings
 Write-Host "Content DB:" $cdb.Name
 Write-Host "Enabled:" $rbs.Enabled
}
I notice that the content database 'SP_Content_PaulXX' mentioned in the ULS log has the RemoteBlobStorageSettings.Enabled flag set to true.

Resolution: See the full write-up below.

*************************************************
Problem: I have SP2013 + SQL 2012 and I am using SSRS in SharePoint mode.  The app pool accounts for my web app and my SSRS SSA are different and on separate servers.  For this to occur you need SP2013, SSRS and RBS enabled (or the content database still thinks RBS is enabled); additionally the service account used by the SSRS SSA needs to have minimal permissions.  Existing rdl files display correctly, however any rdl files added afterwards throw an exception.  The diagram below further explains the scenario:
Initial Hypothesis: Trawling through the ULS logs shows the error: System.Data.SqlClient.SqlException (0x80131904): The EXECUTE permission was denied on the object 'rbs_fn_get_blob_reference', database...

A snippet of the ULS is shown below:

 
Resolution:  The app pool account used by the SSRS Service Application needs to have permissions to run the function. 
1.> Figure out the app pool account used by the SSRS SSA if you don't know it as shown below:

2.> Give the SP_Services account permissions over the erroring execution calls.  To prove it give the account dbo rights.

 
Thanks to Sam Keytel and Mark Oburoh for looking at this with me.
 

More Info:
 
 


Tuesday 7 January 2014

Office Web App Common Problems & Fixes

There is now an article on WCA on TechNet that includes troubleshooting.  Updated 30/04/2014.

Finding your issues: Office Web Apps (WCA) displays an error without a correlation Id.  If you have a small WCA farm, you can trawl the WCA ULS logs using ULS Viewer to find your issue.  However, on a busy or large farm, finding the ULS errors is tedious.  You can use the IE developer tools, Fiddler, or any tool that lets you find the correlation Id sent in the response from the WCA server.

The screenshot below shows how to use Fiddler to find a correlationId returned from the WCA server.  In Fiddler, view the http response and find the "X-CorrelationId" header.

Tip: It is also worth verifying the WCA farm is reachable and running using https://wcaservername/hosting/discovery
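That check can also be scripted; a minimal sketch (replace wcaservername with your WCA host name):

# Expect a 200 response and the WOPI discovery XML back from the WCA farm
$response = Invoke-WebRequest -Uri "https://wcaservername/hosting/discovery" -UseBasicParsing
$response.StatusCode
([xml]$response.Content).'wopi-discovery' -ne $null   # True if the discovery XML came back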

==========================================

Problem: When opening a Word or PDF document I receive the following error popup: "Sorry, there was a problem and we can't open this PDF. If this happens again, try opening the PDF in Microsoft Word.".
Initial Hypothesis:  WCA was working and PDFs were opening.  The error is similar to the error received when the networking is not correct (the WCA machines can't access the SharePoint WFEs).  Opening the ULS logs on the WCA machines, I can see the following error message: "WOPI CheckFile: Catch-All Failure [exception:Microsoft.Office.Web.Common.EnvironmentAdapters.UnexpectedErrorException: HttpRequest failed ---> Microsoft.Office.Web.Apps.Common.HttpRequestAsyncException: No Response in WebException ---> System.Net.WebException: Unable to connect to the remote server ---> System.Net.Sockets.SocketException: No connection could be made because the target machine actively refused it 10.189.xx.15:443"   

Resolution:  My load balancer was not passing the https traffic from the WCA servers through to the WFEs.  I fixed the load balancer so https traffic is forwarded correctly.  The WCA servers need to speak to the WFE/SharePoint servers on either http or https, depending on how the WCA farm is configured (the options being SSL termination, full SSL, or no SSL).

===========================================

Problem:  Can't open any document in WCA and the WCA ULS is generating the following issue:
WOPI CheckFile: Catch-All Failure [exception:Microsoft.Office.Web.Common.EnvironmentAdapters.UnexpectedErrorException: HttpRequest failed ---> Microsoft.Office.Web.Apps.Common.HttpRequestAsyncException: No Response in WebException ---> System.Net.WebException: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel. ---> System.Security.Authentication.AuthenticationException: The remote certificate is invalid according to the validation procedure.   

Initial Hypothesis:  On each specific WCA server, I try to open the SharePoint web application, i.e. https://www.demo.dev; the browser displays that the certificate has errors ("Certificate error").  Opening up the certificate, the certificate chain looks correct.

Resolution:
1.> Run > MMC > File > Add snap-ins > Certificates > Add > Computer Account > Local Computer > Finish. OK.
2.> In the MMC console navigate to Certificates > Trusted Root Certificate Authorities > Certificates > (Right click) All Tasks > Import (both the Trust root and the intermidiary certificates required.
After adding the missing certificates, open the the browser and check the certificate.  Thank to David C for documenting and figuring out the issue is the certificate chain being used on the WCA servers.  Office web apps is working again.

===========================================

Problem: When Opening a word document I get the following error when Office Web Apps tries to render the document "There's a configuration problem preventing us from getting your document. If possible, try opening this document in Microsoft Word."

Initial Hypothesis: The error doesn't tell me much, so check the ULS logs on the Web Front End (SharePoint server).  I found the following issue in my ULS:
WOPI (CheckFile) - Invalid Proof Signature for file SandPit Environment Setup.docx url: http://web-sp2013-uat.demo.dev/Docs/_vti_bin/wopi.ashx/files/6d0f38c0d5554c87a655558da9cedcad?access_token...
Resolution: Run the following PS> Update-SPWOPIProofKey -ServerName "wca-uat.demo.dev"

More Info:
http://technet.microsoft.com/en-us/library/jj219460.aspx

===========================================

Problem:  When opening a docx file using WCA (Office Web Apps) I get the following error message: "Sorry, there was a problem and we can't open this document. If this happens again, try opening the document in Microsoft Word."
I then tried to open an Excel document and got the error: "We couldn't find the file you wanted.
It's possible the file was renamed, moved or deleted."

Initial Hypothesis: Checked the ULS logs on the only OWA server and found this unexpected error:
 HttpRequestAsync, (WOPICheckFile,WACSERVER) no response [WebExceptionStatus:ConnectFailure, url:http://webuat.demo.dev/_vti_bin/wopi.ashx/...
This appears to be a networking related issue; I have an NLB (KEMP) and I am using a wildcard certificate on the WCA address with SSL termination.

Resolution:  The error message tells me that the OWA server can't get back to the SharePoint WFE servers.  The request from the WCA/OWA1 server back to the SP front end server is not made over https but http, and I have an issue as my NLB can't deal with traffic on port 80.
I added a host entry on my OWA1 server so that traffic to the SharePoint WFE goes directly to a server by IP, and it works.  This means I don't have high availability.  An NLB service dealing with the web application on port 80 will fix my issue.

The OWA servers need to access the web app on port 80; my NLB was stopping all traffic on port 80.
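For reference, the temporary hosts-file workaround on the OWA server looks something like the sketch below; the IP is a placeholder for my WFE1 and this needs an elevated PowerShell prompt:

# Temporary workaround on OWA1: point the SharePoint web application host name straight at WFE1
# (the IP is a placeholder - this is not a highly available fix)
Add-Content -Path "$env:windir\System32\drivers\etc\hosts" -Value "10.189.xx.10`twebuat.demo.dev"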

==========================================

Problem:  The above issue was temporarily fixed by adding a host entry on the WCA1 server so that using the url of the web application on port 80 would direct the request back to WFE1.  I turned on WCA2 and WFE2 so I now have 2 SharePoint web front ends and a 2 server WCA farm.  In my testing I have docx, doc and Excel files.  From multiple locations I could open and edit the docx and doc files, but opening the Excel file gave me this issue: "Couldn't Open the Workbook
We're sorry. We couldn't open your workbook. You can try to open this file again, sometimes that helps."
 

Initial Hypothesis: Documents are cached, and the internal load balancing seemed to make Word documents available using Office Web Apps.  I assume the requests are coming out of the cache or from the OWA server that has the host entry.  I need to tell OWA where to go via a host entry or an NLB entry.  Note: using a host entry won't make OWA highly available/redundant.  This is the same issue as mentioned in the problem above.

Resolution: I added a host entry to the WCA2 server, it points to the WFE1 machine.

==========================================

Problem: Opening docx or pptx files in Office Web Apps 2013 results in the error "Sorry, Word Web App ran into a problem opening this document. To view this document please open it in Microsoft Word."

Resolution:  I don't like it, but I had to remove the link to the WCA farm, rebuild the WCA farm and hook SP2013 back up to the WCA farm.  Other documents were opening, and I realised that my bindings were incorrect after I rebuilt.

============================
Problem: Can't open word, pdf, pptx or excel documents using Office Web Apps.  ULS on the OWA servers included these log messages: WOPI (CheckFile) - Invalid Proof Signature for file.  WOPI Proof: All WOPI Signature verification attempts failed.  WOPI Signature verification attempt failed with public key. 
Also found in the logs: "Error message from host: Verifying signature failed, host correlation" and "
HttpRequestAsync (WOPICheckFile,WACSERVER), request failure [HttpResponseCode:NotFound, HttpResponseCodeDescription:Not Found, url:https://www.demo.dev/_vti_bin/wopi.ashx/files/8b07d55558955551beb5555bed545553?access_token=REDACTED_1014&access_token_ttl=1392256555993]"

Same issue ULS excerpt:  
Error message from host: Verifying signature failed, host correlation:
WOPI CheckFile: Catch-All Failure [exception:Microsoft.Office.Web.Common.EnvironmentAdapters.FileUnknownException: WOPI 404   
 at Microsoft.Office.Web.Apps.Common.WopiDocument.LogAndThrowWireException(HttpRequestAsyncResult result, HttpRequestAsyncException delayedException) 
 
FileUnknownException while loading the app.


Hypothesis: None really; I can't understand why the SP and WCA farms are struggling to communicate.  I believe the cause is something to do with the load balancing/network changing. 

Resolution: Remove the link between the SP farm and WCA:
PS> Remove-SPWOPIBinding –All:$true
Connect SP to the WOPI farm
PS> $internalName = "wca.demo.dev"
PS> $internalZone = "internal-https"
PS> New-SPWOPIBinding -ServerName $internalName –AllowHTTP
PS> Set-SPWopiZone -zone $internalZone
 

==============================
Problem: Intermittent requests are not returning the pdf/word documents.  Most requests work, but occasionally 1 request doesn't.  Every 4th request tries to get the pdf to display in Office Web Apps for a few minutes without any error message, then stops trying and displays the message "Sorry, Word Web App can't open this ... document because the service is busy."

Initial Hypothesis:  Originally I thought it was only happening to pdfs, but it is happening to Word and pdf documents (I don't have Excel docs in my system).  My monitoring software, SolarWinds, is badly configured on my OWA servers: the monitor is showing green, but drilling into the server monitoring, the 2 application monitors are failing.  The server should go amber if either of the 2 applications fails, and in turn red after 5 minutes.  At this point I notice that I can't log onto 1 of my 4 OWA/WCA servers; web requests are not being returned.  I look at my KEMP load balancer and it says all 4 WCA servers are working, but I notice the health check is not based on web requests but on ping (not right), and the NLB/KEMP is merely redirecting every 4th request to the broken server.

Resolution: 
  1. Reboot the broken server; once it comes up I can make http requests directly to the server at http://wca.demo.dev/hosting/discovery and it's all working again.
  2. SolarWinds monitoring is lousy - need to get it fixed.
  3. The Kemp hardware load balancer health check needs to be changed from checking the machine is on (ping) to checking each machine using a web request (a simple scripted check is sketched below).
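Until the load balancer health check is fixed, a crude scripted check of each WCA server by web request is easy enough.  A minimal sketch, with placeholder server names:

# Crude availability check of each WCA/OWA server by web request rather than ping
$servers = "sp-wca1", "sp-wca2", "sp-wca3", "sp-wca4"   # placeholder server names
foreach ($server in $servers)
{
 try
 {
  $code = (Invoke-WebRequest -Uri "http://$server/hosting/discovery" -UseBasicParsing -TimeoutSec 10).StatusCode
  Write-Host "$server : $code" -ForegroundColor Green
 }
 catch
 {
  Write-Host "$server : failed - $($_.Exception.Message)" -ForegroundColor Red
 }
}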




 

Thursday 2 January 2014

IIS setting for SharePoint 2013

Some checks and reminders for IIS - This is a work in progress!

1.> Change the IIS log location for existing websites, this needs to be done on each WFE in your farm, providing you want to change them. 
PS Script to Change the IIS log directory for existing web sites.
2.> Disable IIS recycling
3.> Ensure app pool accounts have low levels of network permissions.
4.> Certificates used by IIS, when do they expire.
5.> Application Initialisation for IIS8 or warm-up scripts to stop the long delays after and IISREST/app pool recycle.

   **************
CPU over utilisation


   ********************

Verify when certificates are going to expire:

Import-Module WebAdministration
$DaysToExpiration = 365
$expirationDate = (Get-Date).AddDays($DaysToExpiration)
$expirationDate5yrs = (Get-Date).AddDays(1825) # 5 years
$certs = Get-ChildItem IIS:SSLBindings
foreach ($cert in $certs)
{
 $store = $cert.Store.ToString()
 Write-Host " Cert Store:" $store
 Write-Host " Cert Port:" $cert.Port.ToString()
 Write-Host " Cert Thumbprint:" $cert.Thumbprint
 # Note: this lists every certificate in the bound store, not just the bound certificate
 $body = Get-ChildItem CERT:LocalMachine/$store
 foreach ($me in $body) {
  if ($expirationDate -gt $me.NotAfter) {
   Write-Host " Expiring within a year" -BackgroundColor red
  }
  elseif ($expirationDate5yrs -gt $me.NotAfter) {
   Write-Host " Expiring within 5 years" -BackgroundColor Yellow
  }
  else {
   Write-Host " Expiring more than 5 years away" -BackgroundColor green
  }
  Write-Host " - Body subject: " $me.Subject
  Write-Host " - Body thumbprint: " $me.Thumbprint
  Write-Host " - Body friendly name: " $me.FriendlyName
  Write-Host " - Body expiry: " $me.NotAfter
 }
 Write-Host ""
}

 *********************
 Check the service accounts do not have too many permissions:
The script below retrieves the app pool password to show the client the potential issue.

Import-Module WebAdministration
foreach ($webapp in Get-ChildItem IIS:\AppPools\)
{
 $iispath = "IIS:\AppPools\" + $webapp.name
 $pswd = $webapp.processModel.password
 $state = (Get-WebAppPoolState -Name $webapp.name).Value
 $color = "White"
 $forecolor = "Black"
 if ($pswd.Length -gt 0) {$color = "red"}        # verify the domain accounts don't have excessive privileges
 if ($state -eq "Stopped") {$forecolor = "blue"} # why are there stopped IIS app pools?
 Write-Host "Name:" $webapp.name " | Version:" (Get-ItemProperty $iispath managedRuntimeVersion).Value `
  " | Username:" $webapp.processModel.userName " | Pswd:" $pswd `
  " | State:" $state -BackgroundColor $color -ForegroundColor $forecolor
}

 Tip: Advise the client to change the Windows service account used to run the SP timer job.  Check the ramifications first.


**********************************

Understanding SQL backups on SharePoint databases using the Full recovery mode

Overview:  This post looks at reducing the footprint of the ldf file.  SharePoint-related databases using the Full recovery model keep all transactions since the last "differential".  It explains how SQL is affected by backups such as SQL backups, SP backups and 3rd party backup tools (both SP backup and SQL backup tools).

This post does not discuss why all your databases are in full recovery mode, nor does it look at competing backup products.  It also contains steps to truncate and then shrink the size of the transaction log.

Note: Shrinking an ldf file only for it to regrow each week/cycle is bad practice.  The only time to shrink is when the log holds unused transactions that are already covered by backups.

Background
  • Using the full recovery model allows you to restore to a specific point in time.
  • A Full backup is referred to as your "base of the differential".
  • A "copy-only" backup cannot be used as a "base of the differential", this becomes important when there are multiple providers backing up SQL databases.
  • After a full backup is taken, differential backups ("differentials") can be taken.  Differentials contain all changes to your database since the last "base of the differential".  They are cumulative, so obviously they grow bigger with each subsequent differential backup until a new "base of the differential" is taken.
  • To restore, you need the "base of the differential" (last full backup) and the latest "differential" backup.
  • You can also back up the transaction logs (in effect the ldf).  These need to be restored in sequential order, so you need all log file backups.
  • If you still have your database you can produce the "tail-log" backup this will allow you to restore to any point in time.
  • Every backup gets a "Log Sequence Number" (LSN); this allows the restore to build a chain of backups for restore purposes.  This chain can be broken by 3rd party tools or by switching the database to the simple recovery model. 
  • A confusing pair of closely related terms is "shrinking" and "truncating".  You may notice an ldf file has got extremely large; if you are performing full backups on a scheduled basis this is a good size to keep the ldf at, as you don't want to grow ldf files on the fly (it is extremely resource intensive).  However, say your log file has not been purging/removing transactions within a cycle: then you may have a completely oversized ldf file.  In this scenario, you want to perform a full backup and truncate your logs.  This does not remove data from the file; the space used by committed transactions is marked as available to be used again.  You can then perform a "shrink" to reduce the physical size of the ldf file; you don't want ldfs growing every cycle, so don't schedule the shrinking.
  • "Truncating" is marking committed transactions in the ldf as "free", i.e. available for writing new transactions to.
  • "Shrinking" will reduce the physical size of the ldf.  Shrinking can reclaim space from the "free" space in the ldf.
 The process to Shrink the transaction log files:

1.> Determine which databases are suitable candidates for shrinking the log file. 
DBCC SQLPERF(LOGSPACE);


2.> Perform a Full backup,
3.>  Next perform a transaction log backup and truncate the database.
 4.> Run DBCC ShrinkFile as shown below (please remember to leave growth so the ldf file is not growing, keep extra room in the ldf - this should be used to reduce the size log file that has grown way to far).  The example below will leave my transaction log at 100MB.
USE AutoSP_Config;
GO
DBCC SHRINKFILE (AutoSP_Config_log, 100);
GO

5.> Verify the ldf file has reduced in size.

Update 2014-02-05: I have AlwaysOn (AOAG) on my separate SharePoint Reporting Services databases.  The SP_RS_ServiceTempDB should be excluded from the availability groups; keep it in the simple recovery model.  Its logs grow extremely quickly and it does not need to be in Full recovery mode.

Update 2014-04-25: Shrinking AlwaysOn databases does not require you to remove the database from the AOAG; you can merely perform a full backup on the primary and then shrink the database.  Note: Watch the backup chains, you can break them, especially if you have a 3rd party backup tool on SharePoint.

Update 2019-04-18:
Change a database from the Full recovery model to Simple using T-SQL and shrink the ldf file:
USE [LiveDB]
GO
ALTER DATABASE [LiveDB] SET RECOVERY SIMPLE WITH NO_WAIT
GO
-- use the logical name of the log file (typically LiveDB_log; check sys.database_files)
DBCC SHRINKFILE ('LiveDB_log', 10)
GO

The normal process is to change the db as follows: 1) set Simple recovery, 2) shrink the ldf (remember to size it so it can handle a full backup cycle), 3) set Full recovery, 4) perform a full backup (to start the SQL backup chain again) - steps 3 and 4 are sketched below.
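Steps 3 and 4 of that process, sketched with the same assumptions (the instance name and backup path are examples):

# Step 3: put the database back into Full recovery, step 4: take a full backup to restart the chain
Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Query "ALTER DATABASE [LiveDB] SET RECOVERY FULL WITH NO_WAIT;"
Backup-SqlDatabase -ServerInstance "SQLSERVER01" -Database "LiveDB" `
 -BackupFile "E:\Backups\LiveDB.bak" -BackupAction Database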