Sunday 6 December 2015

Smoke Testing SharePoint using Selenium IDE

Overview:  In SharePoint we need to retest code often as we make incremental changes.  Basic smoke testing is useful in that it allows you a certain degree of confidence that a bug has not crept into your latest deployment.

Most projects have varying degrees of controls to ensure bugs do not cause unexpected behaviour, and among the more advanced practices are unit testing and coded UI tests.  Unit tests are tricky with all the new SharePoint development methods.  Jasmine is a JavaScript testing framework; check out @SPDoctor (Bill) for basic testing information.  Unit testing SP is difficult, as the tests end up testing SharePoint itself and not your changes.  A lot of the code is UI driven, which is hard to unit test.  I have previously written about coded UI testing, MTM as part of the MS test tooling, and continuous testing.  Part of this is Selenium WebDriver; I've used it once on a large project and it was awesome.  Now, as you go into production, you normally do some manual smoke testing to check the deployment.

This post looks at automating smoke tests.

There are various tools for recording smoke tests, or you can do the manual eyeball approach still favoured by most SP projects.  I have used PowerShell (both to generate HTTP requests and to control IE).  MTM is good, but it requires buy-in to the whole MTM process.  Personally I like Selenium IDE for Firefox.  It offers recording, and the capturing functionality is miles ahead of anything else.  These Selenium-generated tests can be exported and used with Visual Studio or built into TFS; on one project I used TeamCity to run automated continuous build and integration.

Note:  Selenium IDE is the recording piece; Selenium WebDriver is the heavy-duty testing integration part.

Get Started:
1. Download Selenium IDE (make sure you get the Selenium IDE, I'm using version 2.9.0) 
2. Understand the UI and capabilities (YouTube basic Selenium IDE videos are great)
3. Install Selenium IDE on Firefox

4. Launch Selenium IDE using Firefox

5. Record and run a Selenium test (a 2-minute video showing a Selenium IDE recorded test against a SharePoint Online team site).

Another option is to control IE using PowerShell:
PS> $ie = New-Object -ComObject InternetExplorer.Application  # open IE
PS> $ie.Visible = $true  # show the browser window
PS> $ie.Navigate("http://www.radimaging.co.uk/Pages/default.aspx")
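For request-level smoke tests without a browser, a minimal sketch along these lines works (the URLs are examples only; Invoke-WebRequest needs PowerShell 3.0+):

# Loop over key pages and check each returns HTTP 200
$urls = @("http://www.radimaging.co.uk/Pages/default.aspx",
          "http://www.radimaging.co.uk/sites/team/Pages/default.aspx")
foreach ($url in $urls) {
    try {
        $response = Invoke-WebRequest -Uri $url -UseDefaultCredentials -UseBasicParsing
        Write-Host "$url -> $($response.StatusCode)"   # expect 200
    }
    catch {
        Write-Warning "$url failed: $($_.Exception.Message)"
    }
}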


Sunday 1 November 2015

SharePoint 2016 Preview - Min Role

Overview:  Notes on SharePoint 2016 Min Roles ahead of the public betas.  Things are going to change, and this obviously applies only to SP2016 Preview on-prem.

"Min Role is basically a topology assistance service"

Installation:  The services are pretty similar to SP2013, and most of the PowerShell from AutoSPInstaller will work with SP2016.  Excel Services has been removed.  The difference in the UI install is the ability to assign a server a min role during a multi-server install.  Min roles are collections of services installed on a machine.  I'd lean towards installing full/custom roles and then converting specific servers to the specific min roles.

Using the min roles, SharePoint can verify the roles are in compliance, which can be used to manage the farm.  In effect you'd need all 4 roles to have all the services on a SharePoint farm, and you will need 2 instances of each for high availability.  So, excluding SQL, you will need 8 VMs for a High Availability (HA) farm.  If you install extra services on a min role server, a SharePoint timer job shall stop the service on a daily basis (not proven).

There are 4 server roles:
  1. Web Front end,
  2. Distributed Cache (also has a witness/quorum),
  3. Application,
  4. Search
Servers can be changed from Custom to specific min roles or the other way around, as sketched below.
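A minimal PowerShell sketch for checking and converting server roles (the server name is an example):

Get-SPServer | Select-Object Name, Role                  # check the current role of each server
Set-SPServer -Identity "SP16-WFE01" -Role WebFrontEnd    # convert a server to a min role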

Note:  Full High Availability
Min 2 of each role (plus an extra quorum server for the Distributed Cache role) plus 2 more for Search HA, so 11 servers for core SP.  Plus SQL AOAG 3 servers, plus OWA/OOS 2, plus SP workflow, plus K2/AvePoint, plus SSIS.


Sunday 18 October 2015

SharePoint 2016 Preview (Public Beta 1) on-prem Notes

Disclaimer:  These are notes I made from workshops, things I have heard, and the UnityConnect conference in Amsterdam, 12-14 Oct 2015.  A lot of this information is from the workshop with Neil Hodgkinson and Spencer Harbar; this is my takeaway summary.

Notes for SP2016:
  • Same HW requirements as 2013.  Farm servers still need a minimum of 12-16 GB memory, x64 CPU (1x4 cores), 80 GB disk
  • Pre-reqs: Win 2012 R2, Windows Management Framework (WMF 4.0+ gives us DSC), .NET 4.5.2, etc.  DSC can be used to pre-bake the VM image.
  • Same DB rules as recommended by MS, losing DBs from 2013.  No new DBs (except the Project Server DB, as Project Server is now part of SP); need SQL 2012 or 2014.
  • Need Win 2012 Standard or higher, not Web edition; also dev installs can support Windows 10
  • Still no support for VMWare dynamic memory
  • End-point encryption for SMTP
 - Upgrades and Patching
  • No Foundation edition; SP2013 Foundation upgrades to SP2016 Server
  • Upgrade path: SP2013 > SP2016
  • SCs must be in 15 mode to upgrade
  • Service Apps need to go SP2010 > SP2013 > SP2016
  • SP2010 to SP2016 needs to go via the 2013 RTM baseline
  • Changed patching: smaller packages and fewer restarts
  • PSConfig no longer locks the farm; you can run multiple PSConfigs, with lower/zero downtime patching (with HA farms)
 - Roles & Services
  • Consider moving low impact services onto the traditional WFE role, keep the long running/batch processing (Crawl, search, MTS,  et al) on the app servers.
  • WFE (Access services, SSS, Subscription Services, UPS)
  • Distributed Cache has a quorum, so you need 3 servers, not 2, for HA.
  • Health Analyser rule for min role enforcement:  Puts min role in the correct state.
  • Min Role does not manage the search topology
  • Watch out when switching min roles, as the index would be lost unless it is replicated (2 instances of each index)
  • "Services in Farm" overrides the starting of services in the min roles, so you can never start "Request Management" in "Services in Farm" but still use the "Distributed Cache" min role.
  • You can always switch min roles ("Convert server role") or create custom roles (watch out, as services need multiple instances to keep running and the index could disappear).
 - Key Thresholds for 2016:
  • CDB sizing
  • 100K SC per CDB
  • Max file size 10GB
  • Search index up to 50 million items
  - User Profile Sync:
  • UPS Sync (FIM) is now Microsoft Identity Manager (MIM); the built-in FIM sync engine is gone
  • 2 modes: Active Directory Import (lightweight; not useful for most large enterprise clients, e.g. can't import pictures or use BCS) or MIM 2016
  • AD Import: faster than 2013, can only use AD, no profile pictures.
  • MIM 2016 was FIM: a standalone product; only the sync engine part is used for SharePoint (free if you only use this service, but it does need a Win 2012 and SQL Server licence)
  • Using MIM management agent map AD properties to SP user Profile properties
  • Syncing is driven by MIM not by SharePoint (UPS sync)
- What's New:
  • Post to yammer from SP2016 doc library
  • Improved integration
  • Image and Video Preview (changed)
  • Doc lib accessibility (improved keyboard shortcuts, improved experience for visually impaired users)
  • SC creation is faster on the SP site template using SPSite.Copy
  • Project Server is part of the SP binaries/install; Project Server uses its own project DB and adds 4/5 tables to the content database.  Project Server affects 3 DBs (project DB, content DB and config DB)
  • Save and share email attachments in SP2016


- Release Dates
  • Preview = Beta 1 Aug 2015
  • Beta 2 RC  = +- Nov/Dec 2015
  • RTM Q1/Q2 2016

Thursday 15 October 2015

Hybrid Search 2016 Notes


Hybrid Search SP2016 (also applies to SP2013):  Mixing on-prem and SPO results
  • Search can add all crawls into a single index within SharePoint Online (historically we have had to use search federation to try to combine result sets).  With federation, there are multiple indexes that are then shown on a single page; this approach to search result federation is referred to as "search-time merging".
Federated search as provided by MOSS, SP 2010 and SP 2013

  • The index is held in SPO.  The new model is referred to as "index-time merging".
A single result set from multiple farms (the indexes are joined into one SPO index): vNext hybrid search.
  • Crawls done on SP2016, SP2013 and maybe SP2010 are pushed into an Azure queue, which in turn is combined into the SPO index (I believe the index is encrypted at rest in SPO)
  • DirSync is required between on-prem AD and Azure AD
UnityConnect Conference 2015 Amsterdam Search session - Architecture of Hybrid Search
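Microsoft provides CreateCloudSSA.ps1 and Onboard-CloudHybridSearch.ps1 scripts for the onboarding; the heart of it is creating the Search service application with the CloudIndex flag.  A minimal sketch only (names are examples, and the full Microsoft script does considerably more):

# Create a cloud Search service application; CloudIndex pushes crawled content to the SPO index
$appPool = Get-SPServiceApplicationPool "SharePoint Web Services Default"   # assumed app pool name
$ssa = New-SPEnterpriseSearchServiceApplication -Name "Cloud SSA" -ApplicationPool $appPool -CloudIndex $true
$ssa.CloudIndex   # True indicates this SSA feeds the SPO index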

Saturday 10 October 2015

User Profile Service Application Notes

The People Picker Control in SharePoint 2013 & 2016
  • The People Picker control is used to find users, groups and claims.  You'll notice the people picker UI runs 2 queries (you can normally see the lag): first the people picker looks at the UserInfo list in the local site collection, and then it also looks at Active Directory (this is where custom claims providers need to be written if you don't use AD).  A common misconception is thinking the people picker uses the UPA to look for users; it does not use the UPA directly.  Indirectly, if the UserInfo list is populated, the People Picker gets the user from the 1st lookup.
  • The UserInfo table is populated with an initial stub when a user is granted security permissions or the 1st time they log in to the site collection.
  • The UserInfo list is not updated if you add the user via an AD group or if you pick the user in a people picker column on a list.
  • The UserInfo list on each site collection gets an initial stub of information, and a timer job at the web application level looks for a list of users to perform a sync (updating more info for the user in the UserInfo table); this runs every 5 minutes by default.  The job is: User Profile to SharePoint Quick Sync.
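To check the quick sync timer job and its schedule with PowerShell (the web application URL is an example):

$wa = Get-SPWebApplication "http://intranet"
Get-SPTimerJob -WebApplication $wa |
    Where-Object { $_.DisplayName -like "*User Profile to SharePoint*" } |
    Select-Object DisplayName, Schedule, LastRunTime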


Thursday 8 October 2015

Performance Testing Check List

Rough Notes

Performance Factors Checklist:

  1. Geography and networking - Australian and African users always tend to have issues with the centralised SharePoint farms they are accessing.  Poor networking, especially in remote satellite offices, is not a SharePoint issue, but as enterprises grow these are the pains they need to work out.
  2. Security and network design
  3. Usage patterns (Working with documents in the UK after noon when the US comes online may have a peak usage rate 5 times higher than when Australia starts in the morning)
  4. Functionality (Search is heavier than displaying a static web page)
  5. Application design (too many web service calls when 1 could do all the calls) and application implementation.  The application needs to meet the functional and non-functional requirements.  Coding errors creep in; the implementation needs to meet the design, and unnecessary code should be improved.
  6. Platform (a 2-server VM farm will only handle a set load no matter how well it is tweaked)


Figure out where the bottlenecks are and how big the impact is:

  • SQL is a common performance bottleneck; check you have optimised SQL
  • Monitor all servers in your farm; the random "throw more RAM at the solution" approach is pointless most of the time.
  • Size of lists in SP
  • Size of Content
  • Physical architecture
  • Search - design: how many items are crawled, are they all needed, multiple search farms, what is going on; AD groups are preferable to individual users
  • WAN latency Testing
  • Baseline SP testing
  • Peak loads and what is affected (Monitor and prove where the issues are).  Concurrency and usage patterns.
  • Archiving, document retention, document versioning, unwanted general storage area, recycle bin, can we clean up
  • Optimise highly used pages - for example, if your home page leverages search and the search service and farm are therefore under load, move the search component into a cached solution or pull it off the home page.
  • Auditing logs - do you log enough, do you log too much, can you pull them out of the content DB and store them elsewhere?
  • Encryption (TDE, SSL, Devices)
  • Web page basics: image size, CDNs, JavaScript optimisation, sprites

Sunday 13 September 2015

SharePoint 2013 Workflow options - notes

Overview:  There are a lot of workflow options, and each of the solutions lends itself favourably to different circumstances.  In this post I look at the more common options around workflow for SharePoint.  The 3 options I'm exploring are: K2 blackpearl, Nintex and the SP2013 Workflow Manager.  Also note that the existing SP2010 workflow engine still exists and is an option if your business already has workflows on that platform or you have the skill set available.  There are other products, but these are the mainstream options.

Each of these products has its place and suits different organisations.  This post is my opinion (I am not a workflow expert) and shows my thoughts on when I would favour one of the approaches.

Licencing:  Workflow Manager has no additional licencing costs.  Nintex has a server and CAL licencing model, and K2 has a server licencing model.

Skills:  what are your existing in-house skills.  If you already have K2 or Nintex expertise it makes these products far more attractive.

Size:  How big is your organisation, how complex are the workflows, how many workflows and how often do they change shall influence the workflow option to select.

SharePoint 2013 Workflow Manager
SharePoint 2013 introduces a new standalone workflow engine based on Workflow Manager 1.0, built on Windows Workflow Foundation in .NET 4.5.  In the SP2013 world, Office Web Apps (OWA) and Workflow Manager run as services separate from SharePoint 2013.
SharePoint Designer 2013:
  • Ideal for simple or medium complexity workflow processes
  • Limited to a pre-defined set of activities and actions
  • Relatively quick and easy to configure
Custom workflow development through Visual Studio:
  • Can implement state-machine workflows
  • Supports custom actions/activities
  • Supports debugging
  • Ideal for modelling complex processes
  • Requires a developer
Workflow Manager:
  • High density and multi-tenancy
  • Elastic scale
  • Fully declarative authoring
  • REST and Service Bus messaging

Nintex
  • On-premise and cloud workflows – but cloud workflows do not allow custom actions
  • Nintex uses the SharePoint workflow engine
  • Easy to create Nintex workflows (good tooling) but not so easy to upgrade and maintain if complex – they require a proper dev environment if workflows require changing
  • Tight coupling with SharePoint – so upgrades need to be planned. Some workflows have broken after upgrade.
  • Can create custom activities but these are limited to constraints imposed by Nintex design surface
  • More suited to State machine workflows using reusable custom modules and user defined actions.
  • Nintex uses its own database which you will need intimate knowledge of when it comes to performance issues.

K2
K2 is technology agnostic: best suited if SharePoint is only a part of your technology stack.  Some folks consider K2 a BPM product.
Pros:
  • Off-box WF hosting:  Allows for increasing the number of blackpearl servers with no resource overlap; flexible licencing model as it is server based
  • Well tried and tested workflow engine
  • Good reporting and troubleshooting
  • Excellent SOA layer (SmartObjects) with multiple products.  This is more an EA feature as it can be a great way to create an SOA.  Allows API to connect to custom SQL, CRM, SAP, Web Services.
  • Proven advanced tooling, good visual tooling (not as good as Nintex IMHO)
Cons:
  • Cost is relatively high, support costs are extensive, need to pay for dev and pre-prod licence
  • Not based on the latest MS workflow engine
  • Not easy for end users to build WF (despite marketing noise)
  • Setup and monitoring is specialised and will require advanced K2 expertise
  • Difficult to back out of the product
  • Tooling requires training and breaking out of OOTB features requires a high level of expertise and dependency on K2
  • Support tended to be patchy with technical knowledge
Updated: 2017-11-03.  Possible Extranet facing blackpearl Infrastructure design
Summary:
K2 is a good product if you need to build an SOA layer for integration and are prepared to install it correctly (cost) and maintain it.  You will need dedicated workflow people to create the workflows.  So in the right circumstances it has its place.

Updated 11 December 2019:
Microsoft Power Automate (formerly Microsoft Flow) is the default workflow option when working with the Microsoft Power Platform (Power Apps, Power Automate and Power BI).  O365, SPO and D365 can also use Power Automate.  Azure Logic Apps is also a good option, especially if your application is C#/Azure based and not within one of the SaaS Azure offerings.

Sunday 6 September 2015

Content Type Hub and MMS Notes

This topic has been well covered and this post merely calls out items I feel are worth knowing.

Notes:


You can configure more than 1 MMS assigned to a Web Application.
You can have multiple CT Hubs that your sites can subscribe to; I think it is up to 4.

After publishing, new SCs will get the CTs pretty quickly, whereas updates to existing CTs in the downstream SCs take some time to permeate out.  A timer job on the subscriber Web Application goes through the SCs and gets the latest CTs (it will overwrite any local changes).  No granular push out (check with JJ).
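A minimal PowerShell sketch of the moving parts (the service application name and URLs are examples):

# Point the MMS service application at a content type hub
Set-SPMetadataServiceApplication -Identity "Managed Metadata Service" -HubUri "http://portal/sites/cthub"

# Find the subscriber timer job that pulls published CTs down to a web application
$wa = Get-SPWebApplication "http://portal"
Get-SPTimerJob -WebApplication $wa | Where-Object { $_.DisplayName -like "*Content Type*" }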

Terminology for MMS (Term Group, Term Sets and Terms):

Thursday 20 August 2015

Non Functional Testing for SharePoint

Overview:  Functional requirements are the business requirements that the business defines for the application being built.  Non-functional testing is concerned with performance, reliability, scalability, recovery, load, security and usability testing.  For SharePoint it is a good idea to test these at a platform level and then verify that the individual application's non-functional testing is appropriate.

SharePoint Non Functional Testing:
All of these tests should be performed against your various SharePoint platforms and will dictate the SLAs offered to the business using SharePoint as a service.  Baseline testing is a good idea, as the differences can be used to determine the efficiency of the individual application being created.

Proxies:
Fiddler is my favourite (other tools for capturing web traffic: Charles, Burp Suite, Wireshark, and the developer tools shipped with the browsers).  tcpdump and gopacket are awesome for network monitoring.

Use Fiddler to:
  • Observe traffic (http/https requests, headers,..)
  • Replay sessions, 
  • Evaluate performance,
  • Set break points
A common misconception is the difference between performance testing and load testing.

Performance testing is primarily concerned with looking at typical user usage scenarios and seeing how long each page takes to load.  So a recorded script with wait times between recorded calls is useful.  It's also worth looking at the same page with minimal data and with large amounts of data.

Load testing involves recording the standard users' interactions (ensure some users query heavy and some light amounts of data); there are no wait times.  The norm is to multiply this concurrent number of users by 100 to know the number of users the farm business application can support.  E.g. a concurrent user load of 100 users where performance is acceptable means the farm should handle 10,000 users (100 users times 100).  You run the 100 users in a steady state for a few hours.

Stress testing is similar to load testing, but you keep stepping up the number of concurrent users until the SharePoint farm starts running out of resources and throttling requests.  The point where the system degrades and performance is no longer acceptable is pretty much how many users the application on the SharePoint farm supports.  So if performance throttles at 170 concurrent users, the farm can handle 17,000 normal-usage users.

References:
http://www.guru99.com/non-functional-testing.html

Sunday 16 August 2015

FedAuth Notes for Problem Solving

Overview:  These are my notes on FedAuth relating to SharePoint 2013.
SharePoint (SP) 2013 uses Claim Based Authentication (CBA).  In this example, I am looking at SiteMinder (a CA product) as the Federation Service (this is the equivalent to ADFS (Active Directory Federation Service) as the Identity Provider (IdP)).
Basic Flow of SP CBA Authentication:
  1. SP looks for a FedAuth cookie; if there is no FedAuth cookie for the user, SP redirects the user to log in via the IdP (AAD/SiteMinder/ADFS et al.).
  2. The IdP returns a valid SAML token to SharePoint's STS.
  3. The STS generates a FedAuth cookie for the user to hold the current user's session state (to keep the user logged in).  The user holds the FedAuth cookie, not the SAML token; the FedAuth cookie is a pointer to the SAML token held in the SharePoint token cache.
The default behaviour of SharePoint is to store the FedAuth cookie on the user’s disk, with a fixed expiration date.  The expiration of the FedAuth cookie can be a fixed time or a sliding session (if the user interacts with SP, the SP session is extended).  The FedAuth cookie can be stored on disk (the default) or in memory (not persisted between browser sessions).

Note:  Changing where the cookie is stored affects the way the user logs in and affects Office application login such as Word.  Test thoroughly before changing.

Note:  Watch the IdP providers expiration policy vs what you set up in SP.  As an example, you could remove a user from the IdP, however, the session is still persisted and the user can still interact with SharePoint.   From MSDN "Make sure that the value of the LogonTokenCacheExpirationWindow property is always less than the SAML token lifetime; otherwise, you'll see a loop whenever a user tries to access your SharePoint web application and keeps being redirected back to the token issuer." 

Note: Closing a browser window with the FedAuth cookie stored to disk does not invalidate the SharePoint session, so the FedAuth cookie persists when IE is closed.  However, keeping the session lifespan/FedAuth lifetime relatively short (think less than an hour) improves your security.
Note: A change from FedAuth to session-based cookies should not be taken lightly; Office products need to be thoroughly tested and generally won't work seamlessly.

Updated 11 Feb 2019: rtFA cookie
"The root Federation Authentication (rtFA) cookie is used across all of SharePoint Online and the rtFA cookie is used to authenticate the user silently without a prompt.  So when moving from OneDrive to SPO or Admin sites, the user does not get additional prompts for login.  When a user signs out of SharePoint Online, the rtFA cookie is deleted."

References:
SharePoint Authentication and Session Management
https://msdn.microsoft.com/en-us/library/hh446526.aspx
Remote Authentication for SharePoint Online (RTFA)
Why IE and Office work together in SP
Adding, removing SP claims and managing security using claims  and NB! also
Logout of SharePoint
 NB!  http://blog.robgarrett.com/2013/05/06/sharepoint-authentication-and-session-management/
Check cookie timeout; it is set by the formula:
FedAuth lifetime = IdP SAML token lifetime - STS LogonTokenCacheExpirationWindow
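A minimal PowerShell sketch for checking and adjusting the STS side of this formula (the 10-minute value is just an example; the SAML token lifetime itself is set on the IdP):

$sts = Get-SPSecurityTokenServiceConfig
$sts.LogonTokenCacheExpirationWindow                               # view the current window
$sts.LogonTokenCacheExpirationWindow = (New-TimeSpan -Minutes 10)  # example value
$sts.Update()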

Update 04 Feb 2016: screenshot for clarity:


My blog post on changing from FedAuth to session based cookies.  The post also shows how to examine the cookies for Internet Explorer (IE).

Sunday 9 August 2015

Request Management

I hate Request Management (RM) and I believe it is going away in SP 2016.

Issues I have seen with RM enabled:
  • Tenant Admin API can't provision Site Collections (point to the WFE's and it works)
  • Workflow sometimes don't start
  • Removing RM improved performance on an Extranet
  • Distributed Cache - One client had intermittent Distributed Cache issues; users had to re-authenticate via the STS.  As RM was hit 1st (using the Distributed Cache) and then picked 1 of several WFEs, the issue was twice as likely to occur.  Using the NLB to direct traffic straight to the WFEs halved the number of re-auths.
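If you need to switch RM off, a hedged PowerShell sketch (the URL is an example; test on a non-production farm first):

# Inspect and disable Request Management routing and throttling on a web application
$wa = Get-SPWebApplication "http://extranet.radimaging.co.uk"
$rm = $wa | Get-SPRequestManagementSettings
$rm | Set-SPRequestManagementSettings -RoutingEnabled $false -ThrottlingEnabled $false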

Saturday 11 July 2015

Machine Translation Service for SP2013

Overview:  I have never used Machine Translation Services (MTS), and this post is my discovery of the service.  These are my summarised notes.
  • Setup a MTS on the farm
  • Configure MTS on the farm
Notes
  • The server/servers running MTS need internet access, as they need to connect to Microsoft Translator.
  • Used to translate word documents, html documents and plain text.
  • MTS has a single database
  • There is a length restriction on translations, so long Word documents won't translate.  This can be amended in your MTS configuration; 500,000 characters is the default max translation length.
  • Full APIs: server-side object model, CSOM and REST.
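Provisioning the service application is largely a one-liner plus an app pool; a minimal sketch, assuming the default web services app pool name and an example database name:

$pool = Get-SPServiceApplicationPool "SharePoint Web Services Default"   # assumed pool name
New-SPTranslationServiceApplication -Name "Machine Translation Service" -ApplicationPool $pool -DatabaseName "MTS_DB"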
More Info:
https://technet.microsoft.com/en-gb/library/jj553772.aspx
http://blogs.technet.com/b/wbaer/archive/2012/11/12/introduction-to-machine-translation-services-in-sharepoint-2013.aspx
http://blogs.msdn.com/b/mvpawardprogram/archive/2013/08/05/overview-of-sharepoint-2013-multilingual-features.aspx
 

Saturday 4 July 2015

Provisioning Site Collections on-prem using the Tenant Admin API

Problem: The ability to provision site collections without using server-side code, using CSOM and the Tenant Admin API.  This is a follow-on to the post: Provisioning Site Collections using CSOM (read it 1st).  Thanks to Sachin Khade, Frank M (check) and Alex N R (check), who gave me their time to understand this: https://sachinkhade.wordpress.com/
I have reduced the Tenant Admin process to the fewest steps that work.


The steps are:
Perform these against an existing Web Application by running the PS script below:
  1. Create an SC using the team site template STS#0
  2. Set the AdministrationSiteType = TenantAdministration
  3. Add a ProxyLibrary that adds the TenantAdmin dll
  4. Attach the proxy to the existing Web Application
  5. Enable SelfServiceSiteCreation on the Web Application
  6. IISReset
  7. Create new site collections using the Tenant Admin API (the post uses a C# console for this; a PowerShell CSOM sketch is shown after the script)
PS Script

========
# The first section contains the variables you need to specify based on your needs
$webapp =  get-spwebapplication http://radimaging.co.uk:555 # My Web application (already exists)
$url = "http://radimaging.co.uk:555/sites/msotenantcontext" # Tenant Admin Site Collection used for provisioning (does not exist)
$WebsiteName = "Tenant Admin"
$WebsiteDesc = "Tenant Admin Site"
# Better to use the site template "tenantadmin#0"; using the team site template "sts#0" causes
# an error msg (SubscriptionId can't be null). Both work but you get fewer admin options for provisioning.
$Template = "STS#0" 
$PrimaryLogin = "radimaging\psmith"
$PrimaryDisplay = "Paul smith"
$PrimaryEmail = "paul.smith@radimaging.com"
# Create a site collection and top level website
New-SPSite -Url $url -Name $WebsiteName -Description $WebsiteDesc -Template $Template -OwnerAlias $PrimaryLogin -OwnerEmail $PrimaryEmail
$web = Get-SPWeb $url
$web.CreateDefaultAssociatedGroups($web.site.owner,$web.site.secondaryowner,"")
$web.Dispose()


#Set the TenantAdmin SC
$site = get-spsite -Identity $url
$site.AdministrationSiteType = [Microsoft.SharePoint.SPAdministrationSiteType]::TenantAdministration
$newProxyLibrary = New-Object "Microsoft.SharePoint.Administration.SPClientCallableProxyLibrary"
$newProxyLibrary.AssemblyName = "Microsoft.Online.SharePoint.Dedicated.TenantAdmin.ServerStub, Version=15.0.0.0, Culture=neutral, PublicKeyToken=71e9bce111e9429c"
$newProxyLibrary.SupportAppAuthentication = $true
$webapp.ClientCallableSettings.ProxyLibraries.Add($newProxyLibrary)
$webapp.SelfServiceSiteCreationEnabled=$True
$webapp.Update()
Write-Host "Successfully added TenantAdmin ServerStub to ClientCallableProxyLibrary."
# Reset the memory of the web application
Write-Host "IISReset..."   
Restart-Service W3SVC,WAS -force
Write-Host "IISReset complete."
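For the final step, a hedged PowerShell CSOM sketch of what the C# console does (dll paths, URLs and names are examples; on-prem default credentials are assumed):

# Load the CSOM assemblies (paths assume the SP2013 hive/SDK location)
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.dll"
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.SharePoint.Client.Runtime.dll"
Add-Type -Path "C:\Program Files\Common Files\microsoft shared\Web Server Extensions\15\ISAPI\Microsoft.Online.SharePoint.Client.Tenant.dll"
# Connect to the Tenant Admin site collection created by the script above
$ctx = New-Object Microsoft.SharePoint.Client.ClientContext("http://radimaging.co.uk:555/sites/msotenantcontext")
$tenant = New-Object Microsoft.Online.SharePoint.TenantAdministration.Tenant($ctx)
# Describe and create the new site collection (all values are examples)
$props = New-Object Microsoft.Online.SharePoint.TenantAdministration.SiteCreationProperties
$props.Url = "http://radimaging.co.uk:555/sites/newteamsite"
$props.Title = "New Team Site"
$props.Template = "STS#0"
$props.Owner = "radimaging\psmith"
$tenant.CreateSite($props) | Out-Null
$ctx.ExecuteQuery()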


As always, check out this post from Vesa Juvonen:
https://blogs.msdn.microsoft.com/vesku/2014/06/09/provisioning-site-collections-using-sp-app-model-in-on-premises-with-just-csom/

Sunday 31 May 2015

Provisioning Site Collections using CSOM - Tenant Admin API

Overview:  This post looks at provisioning site collections using CSOM.  You can also provision site collections for SharePoint using PowerShell or any Server side object model. 
Points to Note:
Programmatically, you can provision new site collections using CSOM via 2 approaches, namely:
  1. Tenant Admin API
  2. Http Post method (mimic the SharePoint UI for creating a site collection)
Note: Neither approach allows you to specify the content database to connect to; you will need to manage the CDB your site collection goes into using the OOTB round-robin site collection method for on-prem SP.
Note: Tenant Admin API does not allow the Quota template to be specified.  See the FAQ section in this post.
Note: Tenant Admin API requires the April 2014 SP CU or later
Note: The Search Service Application needs to be configured to handle multi-tenancy to work correctly, as do the other service applications, using partitions to support multi-tenancy.  If you already have an existing running farm, the change is a considerable effort.  The service applications need to be created in partition mode and cannot be amended after creation (you will need to re-create the service application).
Note: Using the Tenant Admin API for SC creation - you don't get the usual SP groups such as owner, contributor and visitor - you need to manually create them.
Note: I don't believe you can use the Publishing Site Template using the Tenant Admin API.
The Tenant Admin Site Collection can reside on the same or another Web Application from where the site collections shall be provisioned.  Each Tenant Admin Site Collection (which has its own site template, 'tenantadmin#0') has a SubscriptionId (subscription group), and when using the Tenant Admin site collection to create a new site collection, the new site collection is given the SubscriptionId of the group (i.e. you can't specify the SubscriptionId declaratively).

Outline of steps to setup the Tenant Admin API:
  1. Service Application need to be configured in partition mode (important SSA are: search, UPA, MMS, BCS, SSS, there are more).
  2. Enabling remote site collection creation using CSOM on the Web Application
  3. Set the AdministrationSiteType property on a site collection to "TenantAdministration"
  4. Enable SelfServiceSiteCreationEnabled on the Web Application
  5. Set Up Tenant Admin for Multi Tenancy/setup subscription
More Information:
Multi-tenancy/Site subscriber explained by Bill Baer
Spencer Harbar's Rational Guide to Multi-tenancy is a useful resource
General guidance for hosters in SharePoint Server 2013 provides insight into Multi-Tenancy
https://technet.microsoft.com/en-us/library/dn659286.aspx
Scenarios where multi-tenancy potentially shall be used:
  1. O365/SharePoint Online
  2. SPO-D
  3. Hosting companies
  4. Government implementations such as G-Cloud
  5. Large Enterprise (only with extreme requirements)
Notes on HNSC using Tenant Admin API:
  • When creating a host name site collection with managed paths e.g. http://acme.com/sites/daffy, you need to create the corresponding root hnsc for the routing to work i.e. http://acme.com.
  • Creating an HNSC with a path is considered creating an HNSC, not a path-based site collection or a combination of the naming.
  • The managed path /sites/ works as it is already set up.  If you want another managed path, you need to configure this separately.
Quota Limits:
Quota max storage size and user code points are parameters in the CSOM Tenant Admin API, but they don't set these values, and you cannot set quota templates using CSOM.  Your only 2 options at this point in time: use the UI and apply a template (not really an option for customers with hundreds, thousands or tens of thousands of site collections), or use PowerShell/the server-side object model.
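A server-side workaround sketch for the quota gap (the site URL and template name are examples):

# Apply a quota template after the site collection is provisioned via CSOM
Set-SPSite -Identity "http://radimaging.co.uk:555/sites/newteamsite" -QuotaTemplate "TeamSiteQuota"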

Permissions:
To be able to provision a new site collection, the account used needs contribute rights (it feels low and simple to me, but that is the minimum) or higher on the Tenant Admin Site Collection.

Troubleshooting Tenant Admin Site Collection Provisioning:  Update 2017-06-28
I had tremendous problems with site collections not being completely created using CSOM and the Tenant Admin API on a new server that was provisioned by our engineering department.  There are a couple of IIS and farm settings you will want to review should you get this issue.  Our amazing team figured this out, so the credit is not mine: Gonzalo, Uzzey and Anthony, with thanks!


Changing the IIS timeouts on the WFEs and the SP farm configs made the site collections provision completely and correctly.

Monday 18 May 2015

Remote Event Receivers Basics

Overview:  Historically, we used Full Trust Code (FTC) within SharePoint 2010 and MOSS to handle events in SharePoint, such as an item being added to a list.  Since SP2013 and going forward, full trust code is not the preferred approach, and Microsoft's preferred approach is Remote Event Receivers (RER).

Notes:

  1. RERs are web services that implement the IRemoteEventService interface on a remote web server.  The RER's web service can be hosted on an IIS application server (including Azure).
  2. Events are supported with two code approaches: 1. synchronous calls on the before and after events, i.e. ItemUpdating, ItemUpdated (ProcessEvent); 2. asynchronous calls on past/-ed events, i.e. ItemUpdated only (ProcessOneWayEvent).
  3. RERs can be fired on SP events such as list item changes, BCS events, etc.



Thursday 7 May 2015

SharePoint 2016 Points from the Ignite Conference

6 May 2015


SharePoint 2016 new features (from the Ignite conference 06 May 2015)
http://www.learningsharepoint.com/2015/05/07/sharepoint-2016-new-features-and-enhancements/

 
Notes:
  1. Office Graph and Delve are important in SP2016.
  2. MS are releasing a search add-on for SP2013 later in 2016; this will be part of SP2016 (vNext).  The add-on stores the index in O365 and allows seamless indexing of on-prem and O365 content, using AD to AAD sync.

Download all the Ignite Videos and Slides:
https://gallery.technet.microsoft.com/all-the-Ignite-Videos-and-b952f5ac

Sunday 26 April 2015

Code Reviews for SharePoint

Overview:  Customisation in SharePoint takes different forms, and having suitable environments to test code in before setting it free in production is essential.  This post looks at various types of customisation and how to code review them.  As a solutions architect, and when I was running the Application Development CoE for a large multinational, having standards and a code review checklist helped immensely.  Improving code quality and finding issues early reduces the cost of building applications, so code reviews are a good idea.

There are several automated tools for performing code reviews that target different application platforms (think FTC in SP2010 vs the App Model in SP2013).  When automating the tools, it is good to select the templates/rules that match your organisation and maturity.  Ensure you customise the rules so they are not reporting issues where these are in fact your standards (an example: naming rules in FxCop differ from the SharePoint code naming conventions used by different businesses).

Note: The code review required depends on whether the customisation is CSOM, FTC or JavaScript; what is being created/built will require a different kind of code review.

There are several automation tools that can help identify poor quality code early within the development process.  Like peer reviews, these tools can help developers implement their code in the correct manner.

Note:  Define your coding standards, have up-to-date architecture diagrams for architects, and have rules for when and what features your developers can use.  It's fairly common for outsourcing companies to build a solution only to find out you don't allow the technology they have built with.  I remember an InfoPath-based solution coming into my app development centre a few years ago, and they could not understand that the organisation would not merely turn on InfoPath.

Note: A lot of the tools we previously used in SP 2010 for FTC solutions are not relevant to SP 2013, namely SPDisposeCheck.

Code Review Tooling Options:
  1. Visual Studio
  2. FxCop (Config in VS so it runs with the same rule set as SPCAF)
  3. StyleCop (Config in VS so it runs with the same rule set as SPCAF - forces enforcement of code style at design time)
  4. SPDisposeCheck (SP 2010 only, don't use in SP2013 even for FTC solutions)
  5. MSOCAF
  6. SPCAF (SharePoint Code Analysis Framework)
  7. Black Duck - Build into CI/CD pipeline checks for open source software and identifies potential security issues and highlights licencing concerns.


The 3 areas where code reviews can be performed are:
  1. Developer at run time (think Visual Studio)
  2. Continuous delivery (think gated check-ins)
  3. Formal Code reviews (think solutions architect and quality manager) 
Manually reviewing code is better than nothing (automate where possible) and some basic rules and guidance is provided below.

Summary:
Code reviews improve maintainability, pick up bugs, ensure efficient code, code that shall run in production, improve security, performance and reduce the total cost of ownership.  Automated tools are worth considering and the top tool for me is SPCAF.  Do code review early, often and automate.

JavaScript Code Review Checklist:

1.> Project structure - JS goes into a script folder in the solution file (group images, CSS, JS and file types so the projects are easy to understand and consistent in layout)
2.> Use the strict directive on all pages: "use strict";
3.> Always use JavaScript namespaces - avoids conflicts
4.> Move hard-coded values to constants at the top of the file (excluding single-use meaningful literals); move declarations to the top.

5.> Only use approved frameworks like jQuery; flag if any other frameworks are used.
6.> Commenting.  Ensure method names tell coders what the method is performing.  Add comments that explain the method.  Don't be afraid to add value by adding inline comments.
7.> Display friendly messages to the users if something goes wrong, and add error handling with tracing/logging, such as console.log(), logging to ULS from an app using the provided JS API, or logging to a common logging mechanism.
8.> Single spacing  (no flower potting)
9.> Remove commented out code/unused comment out calls etc.
10.> Always end your switch statements with a default statement.
11.> Ensure coding standards are consistent; consider using http://www.jslint.com/
12.> Code adheres to your agreed coding standards; an example is http://google-styleguide.googlecode.com/svn/trunk/javascriptguide.xml

C# Coding Standards for SharePoint
This is a checklist; the recommendations need to be matched to your business, and some scenarios, such as compiled C# for a PowerShell plugin, won't use all the items in this checklist.
  1. Have you followed the Enterprise design guidelines, branding guidelines and coding standards.
  2. Have you used the Commenting standards e.g. http://msdn.microsoft.com/en-us/library/b2s063f7.aspx
  3. Avoid declaring inline literal strings
  4. Check empty strings using length, e.g. if (email.Length == 0); don't use if (email == String.Empty || email == "")
  5. Use StringBuilder for concatenation don’t keep appending strings
  6. Return Empty array rather than null
  7. Methods must be short and focused.  Method names must be meaningful
  8. Use method overloading, not different names for the same method.  Try to keep classes small, i.e. under 500 lines; if larger, use #regions to split up the code.  Pass objects into methods rather than multiple variables if there are more than 6 parameters.
  9. Enumerators should be used where possible, code is more understandable and options are easy to reuse.
  10. Only import namespaces you need and dlls.  Split code into separate assemblies and use company standard naming with appropriate namespaces naming.
  11. Make helper functions i.e. don't rewrite code several times - refactor
  12. Open connections (SQL and SharePoint) as late as possible and ensure you wrap in error handling and dispose of connections in the finally statement
  13. Reuse core code libraries (ensure commonly re-used functionality is add into core libraries cross-cutting concerns/logging/ email)
  14. Use exception management/try-catch.  Try-catch must catch specific errors first and lastly catch all errors.  No business logic must rely on using catch statements.  Don't throw exceptions within exceptions.  Catch errors as specifically as possible, die gracefully and appropriately, and log the errors using the CoE core code block that puts exceptions in the farm's ULS and event viewer, and potentially the enterprise logging platform.
  15. Dispose of SPSite and SPWeb Server site objects where appropriate. Run http://code.msdn.microsoft.com/SPDisposeCheck before deployment
  16. Run stylecop and code analysis on code regularly and before deployment
  17. Your code is x64 bit compiled. 
Have a common code/core code library that deals with cross cutting concerns, logging, caching etc.

// Resolve the logger via the P&P SharePoint Guidance service locator and log an exception
using Microsoft.Practices.ServiceLocation;
using Microsoft.Practices.SharePoint.Common.ServiceLocation;
using Microsoft.Practices.SharePoint.Common.Logging;

ILogger _logger = SharePointServiceLocator.GetCurrent().GetInstance<ILogger>();
Exception ex = new ApplicationException("This is my test exception");
_logger.LogToOperations(ex);  // writes to the ULS logs and the event viewer
Security in C# and SP
  1. Plain text passwords are not stored in Web.config, Machine.config, or any files that contain configuration settings. 
  2. Input surfaces such as application pages, site pages, web parts and other customizations perform client and server side validation to protect from cross-site scripting (XSS) and SQL injection. 
  3. Minimal use of elevated privileges to interact with SharePoint objects. 
  4. Sensitive data is not stored in URLs, unencrypted cookies in hidden form fields, query strings or with code. 

HTML/CSS

Section 508: the US standard ensuring federal agencies' systems are accessible.
WCAG 2.1 is the compliance standard that should be adhered to and will cover: JAWS/browser testing, screen zooms and braille readers.  WCAG 2.2 is due out in 2021.
RWD testing e.g. Mobile/Phone testing
SEO

SQL Standards (Establish SQL standards), a small example is:

  1. No spacing in naming objects
  2. Do not use reserves words in SQL
  3. Name tables in the singular, e.g. "Patient" not "Patients"
  4. No Underscore in table naming and use Camel case e.g. "PatientResult", underscores are fine in column and Store proc naming.  
  5. Do not prefix tables e.g. "tbl_Patient" or "tblPatient" 
  6. Prefix view with "vw" e.g. vwPatientHistory
  7. Boolean columns prefixed with "Is" e.g. IsActive
  8. Stored Procs prefix with "usp" not "sp".  E.g. uspDeletePatient, use the format usp_Verb_Noun
  9. Prefix functions with ufn 
  10. label foreign key using the prefix fk and follow the format fkTableColumn e.g. fkPatientId 
  11. Make your T-SQL readable, not on one line.  Use line breaks, no empty lines, and indent spacing to make the code readable.
  12. How to comment must be standardised

This list goes on but as a starting point...  Pls post if you feel anything else is relevant.

Saturday 25 April 2015

DevOps Tooling

DevOps Tooling Notes

DevOps Tooling is broken down into the following areas, note the tools often overlap in function.  The list is not exhaustive but these are the more common tools I have come across.
  1. Version Control: TFS, Git, SVN, ...
  2. Bug Tracking: ServiceNow, Jira, ZenDesk
  3. Continuous Testing: Selenium, Jasmine or Mocha or Unit.js (JavaScript testing), NUnit, Web Tests (Visual Studio), SpecFlow
  4. Continuous Integration (CI): TeamCity, Jenkins, Azure DevOps (bigger) 
  5. Configuration Management and Deployment:  Puppet, Chef, Ansible, Salt (all installed on Linux, but they obviously work on Windows environments)
  6. Containers: Docker, Kubernetes, Microsoft Containers. I think the Azure AKS is pretty much containers for Azure now.
  7. Other:  PowerShell, VMWare, HyperV
RESTful API Tooling
  1. Swagger - awesome.  Swagger is a set of tools that help document, build and test your API (your API conforms to the OpenAPI specification, aka the Swagger specification).  A great way to get a contract for users of the API early on.  Updated 2019/11/25: link to Swagger post
  2. Swagger UI, Swagger Integrator,...
  3. Apiary - UI to create an API and publish with mocks.  I prefer Swagger or on simple projects APIM.
  4. API Management (APIM) - flexible Azure service for bringing multiple APIs together securely.  Similar to MuleSoft.  Can import OpenAPI v2 or v3 definitions to create a hosted API.  Can mock, and has a built-in test tool.
  5. RAML is an alternative to Swagger and Apiary (never used)
  6. Blueprint - API documentation tool.  Pretty simple and nice results.
  7. Postman - sends HTTP requests to the API.  Postman is a REST client useful to check your API.  This is my main tool for testing and exploring REST-based APIs.  
  8. SoapUI - if working with SOAP/XML.
  9. Slate - API documentation - I always use OAS/OpenAPI/Swagger.
  10. Fiddler - I'm old school and still love Fiddler and it's capabilities.  Fiddler is a great HTTP debugger.  
  11. BURP - an HTTP debugger to review traffic.  I've used BURP for security testing and it is great for API debugging.  
  12. Charles is another HTTP debugger (never used).
  13. cURL - Cmd line to test API's using HTTP, separate exe to run on Windows, Windows 10 has cURL built in.
  14. Visual Studio
  15. Wireshark - Over the years I have needed packet sniffing to fix issues and always go to Wireshark, I used the tool in the 90's but it had a different name.  Extremely useful for issues relating to firewalls, especially when an environment reacts differently to another working DTAP environment.
  16. Tcpdump is another packet sniffer
Testing:
http://www.incyclesoftware.com/2014/02/executing-selenium-ui-tests-release-management/

More Info:
http://blog.sharepointsite.co.uk/2014/02/devops-and-sharepoint.html
http://www.networkworld.com/article/2172097/virtualization/puppet-vs--chef-vs--ansible-vs--salt.html
http://blog.sharepointsite.co.uk/2013/11/iac-presentation-for-sharepoint.html


Sunday 19 April 2015

PhoneGap and SharePoint

For mobile, start with an HTML5 mobile web app, then PhoneGap (a wrapper to interact with devices), then Xamarin (which recompiles to each platform), and lastly write for each native platform (think iOS/Objective-C for Apple).  PhoneGap and Xamarin are comparable with respect to performance and have trade-offs based on code reuse, developer skill set, and integration into standard developer tool sets.

Idea: Start by building HTML5 sites with a responsive design then leverage these HTML5, CSS and JS assets hooking into SharePoint and extend with device capabilities using Hybrid framework (PhoneGap)

Feature                              HTML5   PhoneGap
Web view                             Yes     Yes
Audio/Video files                    Yes     Yes
Location                             Yes     Yes
Local storage                        Yes     Yes
Camera                               No      Yes
Accelerometer                        No      Yes
Notifications (local, alert, push)   No      Yes
Compass                              No      Yes
Native UI                            No      No
Access to full API/SDK               No      No

Also see:
https://xamarin.com/

Saturday 11 April 2015

Empty Developer Dashboard in SP2013

Problem: No data is showing up on the developer dashboard in SharePoint 2013.

Initial Hypothesis:  My initial thoughts were around the SSL cert issue on the VM, or potentially Fiddler, causing the dev dashboard to be empty.  After looking at the ULS, a good developer could see the Usage and Health Data Collection Service Application was not working.

http://www.wictorwilen.se/sharepoint-2013-developer-dashboard-shows-no-data-issue

Resolution: Once the Usage SSA was configured, the dashboard started working.
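For reference, the SP2013 developer dashboard itself is switched on via the object model; a minimal PowerShell sketch:

# Enable the developer dashboard at the farm content service level
$svc = [Microsoft.SharePoint.Administration.SPWebService]::ContentService
$dds = $svc.DeveloperDashboardSettings
$dds.DisplayLevel = [Microsoft.SharePoint.Administration.SPDeveloperDashboardLevel]::On
$dds.Update()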

Thursday 19 March 2015

Identity Providers for SharePoint

Overview:  I have worked with and evaluated a couple of IdP services and federation server products.  Here is an old post on setting up claims; at the bottom I have some thoughts on the different services/server products.
Background: SAML and WS-Federation protocols are standard Single Sign-On protocols, the following version exist:
  • SAML 1.0, SAML 1.1, SAML 2.0
  • WS-Federation
Security Assertion Markup Language (SAML) is an XML-based protocol for exchanging authentication and authorization data between security domains.
SAML enables web-based authentication scenarios including cross-domain single sign-on (SSO).  SAML is a token representing a principal that normally represents a user but can represent an app.
  
Other terms to understand:
  • Identity provider (IdP) think ADFS/Azure ACS,
  • Service provider (SP) is the SAML consumer in our context this is SharePoint but this can be an MVC app.
  • Realm
OOTB, SP2010 and SP2013 support SAML 1.1, not SAML 2.0; you can write custom code or use a federation server like ADFS to convert SAML 2.0 tokens so they work with SP.
Identity Provider (IdP) Products:
  1. Microsoft ADFS
  2. Ping Federate
  3. Thinktecture IdentityServer
  4. CA-SiteMinder
  5. IBM Tivoli (CAM)
  6. Oracle Access Manager
  7. ComponentSpace
  8. Shibboleth
  9. RSA Federated Identity Manager
  10. Entrust GetAccess
 IdP Services:
  1. Azure Active Directory
  2. LiveId
  3. Google
  4. Facebook
  5. LinkedIn
  6. Yahoo
This list is in no way exhaustive, pls post if you feel I am missing any providers.

Friday 13 March 2015

Capturing NFRs for SharePoint

Problem: Gathering Non-Functional Requirements (NFRs) is always tricky in IT projects, because it is difficult to estimate how the system will be used before you build it.  I often get business users stating extreme NFRs in an attempt to negotiate or to show how world class they are (I generally think the opposite when hearing unreasonable NFRs). 

An example is a CIO at a fairly small NGO telling me the on-prem SP2010 infrastructure needs to be up all the time, so an SLA of 99.99999%.  This equates to 3.2 seconds of downtime a year.  In reality, higher SLAs start to cost a lot of money.  SP2013 and SQL 2012 introduce AlwaysOn Availability Groups (AOAG), which help improve SLA uptime, but this costs in licensing, infrastructure and management.  I need redundancy and the ability to deal with performance issues, so the smallest possible farm consists of 6 servers, 2 for each tier in SP, namely: WFE, App and SQL.

Here is an old post on SP2010 SLAs, but it is still relevant today.

The key is to gather your NFRs and ensure all your usage/applications on the production farm meet expected behaviours.  I have a checklist below.  Going through Microsoft's SP Boundaries, Limits and Thresholds document will help highlight any issues.

The high level items I cover include the following topics:
  • Availability
  • Capacity
  • Compatibility (Browser, device, mobile)
  • Concurrency
  • Performance
  • Disaster Recovery (RTO, RPO)
  • Scalability
  • Search
  • Security
  • SLA

Capacity Example

Item                      Day 1     Year 1      Year 3      Year 5
Site Collections          10        100         250         400
Database Size             > 1 GB    490 GB      1220 GB     1960 GB
Search Index Size         > 1 GB    120 GB      310 GB      490 GB
No of Content Databases   1         1           4           8
No of Search Items        10,000    10 Million  25 Million  40 Million
No of Index Partitions    1         1           3           4

Item                      Day 1     Year 1      Year 2      Year 3
Number of Users           1,000     50,000      80,000      130,000

*Also calculate peak and average concurrency numbers

For average concurrency with 20,000 users, the assumption is that 10% (2,000 users) will be actively using the solution at the same time, and that 1% of the total user base (200 users) will be actively making requests.  So for performance testing you are looking to handle 200 users without delays and a page response time of under 5 seconds, based on the simple guideline I've always used from Microsoft.

Peak concurrency depends on your situation: for example, when the NFL playoff game schedule is announced, the load is not the simple 4 times average concurrency that would be suitable for most internal business applications.  Although this example may be considered a load spike rather than peak concurrency.

It is also worth doing a usage distribution pattern for your users' experience: so 80% may be light users (log in, read 10 pages in your site and perform a single search, with 1-minute gaps between interactions/wait times); the remaining 20% log in, upload a 100 KB document, view 10 pages and perform 2 searches.

RPO & RTO:

RPO - Max amount of lost data (in time)
RTO - Max time lost (rebuild farm and get the latest backups restored) to make the system operational again.   

SQL Server Sizing:
Option 1: Work out the bytes per row for each table, multiply by the expected number of rows, then add the tables together to get the size.
Option 2: Assume 100 bytes for each row, count the number of rows, and get the storage requirement.
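As a purely illustrative example of option 2 (the row count is made up): a table expected to hold 5 million rows at the assumed 100 bytes per row comes to roughly 500 MB, before allowing for indexes and logs.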

More Info:
https://technet.microsoft.com/en-us/library/ff758647.aspx