
Friday 22 November 2019

Azure IaaS backup Service Notes

Azure has great cost calculation tooling.  DR can be pretty expensive if it is running but not being used.  Having the ability to either turn on or deploy a DR environment can make massive cost savings.

I often see organisations overspending Azure dollars; most cost reduction falls into one of these three groups:
  1. Eliminate waste - storage & services no longer used
  2. Improve utilisation - oversized resources
  3. Improve billing options - long-term agreements, Bring Your Own Licence (BYOL)

Apptio Cloudability is a useful tool for AWS and Azure cost savings.  Azure has good help and tooling for cost savings.

Azure IaaS Backup:
  • Recovery Services Vaults
  • Off site protection (Azure data center)
  • Secure
  • Encrypted (256-bit encryption at rest and in transit)
  • Azure VMs or VMs with SQL, and on-prem. VMs or servers
  • Server OS supported: Windows 2019, 2016, 2012, 2008 (only x64)
  • SQL all the way back to SQL 2008 can be backed up
  • Azure Pricing Calculator can help estimate backup costs
  1. Azure Backup Agent (MARS Agent), used to back up files and folders.
  2. Azure Backup Server (a trimmed-down, lightweight version of System Center Data Protection Manager (DPM)), used for VMs, SQL, SharePoint, Exchange.
  3. Azure VM Backup, management done in the Azure Portal to back up Azure VMs (a minimal sketch of enabling this follows the list).
  4. SQL Server in Azure VM backup, used to back up SQL databases on Azure IaaS VMs.
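
Below is a minimal PowerShell sketch of option 3 (Azure VM backup) using the Az.RecoveryServices module; the resource group, vault and VM names are placeholders, so treat it as an outline rather than a finished script.

# Sketch: protect an Azure VM with a Recovery Services vault (Az PowerShell).
Connect-AzAccount

$rg    = "rg-backup-demo"
$vault = New-AzRecoveryServicesVault -Name "rsv-demo" -ResourceGroupName $rg -Location "uksouth"

# Use the built-in default VM backup policy.
$policy = Get-AzRecoveryServicesBackupProtectionPolicy -Name "DefaultPolicy" -VaultId $vault.ID

# Enable backup for an existing VM in the same region as the vault.
Enable-AzRecoveryServicesBackupProtection -ResourceGroupName "rg-vms" -Name "vm-app01" `
    -Policy $policy -VaultId $vault.ID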

Backing up Azure VMs must be done to a vault in the same geo-location as the VMs; it can't cross geo-locations.  Recovery has to be to a different location (verify this is correct?)
Note: "Backup Configuration" setting of the Vault properties can be set to "Geo-redundant"

Azure Recovery Vault Storage choice:
LRS - Locally Redundant Storage - 3 local copies
GRS - Geo-Redundant Storage - 3 local copies plus 3 async copies in the paired region; the paired region stays within the same geography, so data can be kept in Europe for compliance, and all 6 copies are in Europe.
Update Feb 2020: I think there is also a GZRS (geo-zone-redundant) option, check if this has changed?
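
A hedged PowerShell sketch for checking and setting the vault's storage redundancy (the "Backup Configuration" setting mentioned above); the vault name is a placeholder and the redundancy can normally only be changed before any items are protected.

# Sketch: view/set storage redundancy on a Recovery Services vault.
$vault = Get-AzRecoveryServicesVault -Name "rsv-demo" -ResourceGroupName "rg-backup-demo"

# View the current redundancy setting.
Get-AzRecoveryServicesBackupProperty -Vault $vault

# Switch to geo-redundant storage (do this before protecting the first item).
Set-AzRecoveryServicesBackupProperty -Vault $vault -BackupStorageRedundancy GeoRedundant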

Naming is absolutely key, as is having a logical hierarchy within Resource Groups so it is easy to find resources.  I focus on naming my resources consistently; however, I've always felt "Tags" have little or no purpose in smaller environments.  In larger environments tagging can be useful for cost management, recording maintenance, resource owners and creation dates.  Lately, I've been finding it useful to mark my resources with an Environment tag to cover my Azure DTAP scenarios, e.g. Production, Testing, Development.

Sunday 20 May 2018

Azure Helper

Azure Services - Replacing Data Centres with "Azure Virtual Networks"
There are so many different services that are constantly being changed, with new services added.  This info looks at using an "Azure Virtual Network" to replace traditional data centres.  This "Azure Virtual Network" scenario covers VMs, virtual networking (subnets and VPNs), Resource Groups and backups (Recovery Services vaults).

Replacement of a traditional data centre
Tip:  Virtual Networks is a service offered by Azure.  "Azure Virtual Networks" is my term for using Azure to host VMs that happen to use the Virtual Networks service.
  1. Hierarchy is a "VM" assigned to a "VNet" that is in a "Resource Group" in an Azure tenant (see the sketch after this list).
  2. VPN creates an encrypted secure tunnel between an office location (from the router/or a specific machine) directly to your VNet, allowing the office to use the VMs' internal IP addresses.
  3. Use "Azure AD Domain Services" rather than a DC on a VM or on-prem/data centre to connect machines together.
  4. A "Recovery Services Vault" allows you to set up customised policies to back up entire VMs.
Azure SQL

T-SQL to create a new login and assign permissions to a specific database using SQL Server Management Studio:
USE master;  -- on Azure SQL Database you connect directly to master instead of switching with USE
CREATE LOGIN TestReader WITH PASSWORD = 'Password';

USE AzureTimesheetDB;  -- on Azure SQL Database, connect directly to the user database
CREATE USER TestReader FROM LOGIN TestReader;
EXEC sp_addrolemember 'db_datareader', 'TestReader';

Add rights for the TestReader user to run a specific stored procedure:
USE AzureTimesheetDB;
GRANT EXECUTE ON OBJECT::uspGetTimesheetById
    TO TestReader;
GO

Azure Virtual Desktop/ Azure VDI

Microsoft Azure Virtual Desktop (AVD), previously called Windows Virtual Desktop (WVD), is Microsoft Azure's implementation of VDI (Virtual Desktop Infrastructure).  The most common VDI I have come across is Citrix Virtual Apps and Desktops (CVAD).  VDI provides a user with a remote desktop instance so the user has their desktop apps and setup from anywhere without needing a local laptop build, i.e. they don't need a full laptop/client machine locally.  The machine is instead hosted, in AVD's case, in an Azure data centre; the user logs in with their network credentials and gets their instance to work on.  There is no need to build laptops, and it is easy to replace the user's laptop.  The laptop is no longer a risk as the data is held in the data centre.

Tags

I'm not a huge fan of tags; even in complex environments I find that naming the resources and arranging the resource groups logically pays a high return.  One exception is that I apply a common "Environment" tag to all my enterprise resources.  This allows me to quickly filter for production or test environment resources only within the Azure Portal.
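
A hedged sketch of applying and filtering on that "Environment" tag with Az PowerShell; the resource and resource group names are placeholders.

# Sketch: stamp an Environment tag on a resource, then filter by it.
$resource = Get-AzResource -Name "vm-app01" -ResourceGroupName "rg-vms"

# Merge the Environment tag without disturbing any existing tags.
Update-AzTag -ResourceId $resource.ResourceId -Tag @{ Environment = "Production" } -Operation Merge

# List everything tagged as Production.
Get-AzResource -Tag @{ Environment = "Production" } | Select-Object Name, ResourceGroupName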

updated: 2021/07/07 Azure Data Studio

Azure Data Studio can be used instead of SSMS to look at and query SQL databases.

Friday 13 March 2015

Capturing NFRs for SharePoint

Problem: Gathering Non-Functional Requirements (NFRs) is always tricky in IT projects.  This is because it is always difficult to estimate how the system will be used before you build it.  I often get business users stating extreme NFRs in an attempt to negotiate or show how world class they are (I generally think the opposite when hearing unreasonable NFRs).

An example is a CIO at a fairly small NGO telling me the on-prem. SP 2010 infrastructure needs to be up all the time, so an SLA of 99.99999%.  This equates to 3.2 seconds of downtime a year.  In reality, higher SLAs start to cost a lot of money.  SP2013 and SQL 2012 introduced AlwaysOn Availability Groups (AOAG), which helps improve SLA uptime but costs in licensing, infrastructure and management.  I need redundancy and the ability to deal with performance issues, so the smallest possible farm consists of 6 servers, 2 for each layer in SP, namely: WFE, App and SQL.
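
A small sketch showing the arithmetic behind SLA percentages versus allowed annual downtime (which is where the roughly 3 seconds a year for 99.99999% comes from).

# Sketch: allowed downtime per year for a given availability SLA percentage.
$secondsPerYear = 365.25 * 24 * 60 * 60

foreach ($sla in 99.9, 99.99, 99.999, 99.99999) {
    $downtime = $secondsPerYear * (1 - $sla / 100)
    "{0,-10} -> {1:N0} seconds (~{2:N1} minutes) downtime per year" -f $sla, $downtime, ($downtime / 60)
}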

Here is an old post on SP2010 SLAs, but it is still relevant today.

The key is to gather your NFRs and ensure all your usage/applications on the production farm meet expected behaviours.  I have a checklist below.  Going through Microsoft's SP Boundaries, Limits and Thresholds document will help highlight any issues.

The high level items I cover include the following topics:
  • Availability
  • Capacity
  • Compatibility (Browser, device, mobile)
  • Concurrency
  • Performance
  • Disaster Recovery (RTO, RPO)
  • Scalability
  • Search
  • Security
  • SLA

Capacity Example

Item                    | Day 1    | Year 1     | Year 3     | Year 5
Site Collections        | 10       | 100        | 250        | 400
Database Size in GB     | > 1 GB   | 490 GB     | 1220 GB    | 1960 GB
Search Index Size in GB | > 1 GB   | 120 GB     | 310 GB     | 490 GB
No of Content Databases | 1        | 1          | 4          | 8
No of Search Items      | 10,000   | 10 Million | 25 Million | 40 Million
No of Index Partitions  | 1        | 1          | 3          | 4

Item            | Day 1 | Year 1 | Year 2 | Year 3
Number of Users | 1,000 | 50,000 | 80,000 | 130,000

*Also calculate peak and average concurrency numbers

Average concurrency: for 20,000 users, the assumption is that 10% (2,000) of users will be actively using the solution at the same time, and that 1% of the total user base (200 users) will be actively making requests.  For performance testing you are therefore looking to handle 200 users without delays and a page response time of under 5 seconds.  This is based on the simple guideline I've always used from Microsoft.
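
A quick sketch of that simple guideline; the percentages are assumptions to tune per project rather than fixed rules.

# Sketch: derive concurrency targets from a total user base.
$totalUsers        = 20000
$activePercent     = 0.10   # users with the solution open at the same time
$requestingPercent = 0.01   # users actively making requests (the load test target)

$activeUsers     = [int]($totalUsers * $activePercent)       # 2,000
$requestingUsers = [int]($totalUsers * $requestingPercent)   # 200

"Active users: $activeUsers; concurrent requests to load test: $requestingUsers (page response < 5s)"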

Peak concurrency depends on your situation; for example, the NFL playoffs game schedule when it is announced is not the simple 4 times the average concurrency that would be suitable for most internal business applications.  Although this example may be considered a load spike rather than peak concurrency.

It is also worth doing a usage distribution pattern for your users' experience.  So 80% may be light users: login, read 10 pages in your site and perform a single search with 1-minute gaps between interactions (wait times).  The remaining 20% perform a login, upload a 100 KB document, view 10 pages and perform 2 searches.

RPO & RTO:

RPO - Maximum amount of data loss, measured in time.
RTO - Maximum time allowed to make the system operational again (e.g. rebuild the farm and restore the latest backups).

SQL Server Sizing:
Option 1: work out the bytes per row for each table, multiply by the number of rows, and then add the tables together to get the size.
Option 2: Assume 100 bytes for each row, count the number of rows and get the storage requirements.
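
A rough sketch of option 2's estimate; the table names and row counts below are made up for illustration.

# Sketch: rough storage estimate - rows x assumed bytes per row, summed across tables.
$bytesPerRow = 100
$tables = @{ "Timesheet" = 5000000; "TimesheetLine" = 20000000; "Employee" = 10000 }

$totalBytes = ($tables.Values | ForEach-Object { $_ * $bytesPerRow } | Measure-Object -Sum).Sum
"Estimated data size: {0:N1} GB" -f ($totalBytes / 1GB)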

More Info:
https://technet.microsoft.com/en-us/library/ff758647.aspx

Sunday 20 April 2014

Backup and Restore Site Collection in SP2013

Overview:  I have seen several customers use backup and restore to help speed up the development process and to have the ability to deploy between DTAP environments.  The basic premise is to create the site collection in dev and use backup and restore to promote the site collection, including customisations and code, to the next environment.

SharePoint 2010:  In SP2010 this worked, assuming the environment you are going to has a higher patch level than the source environment.  So if you went from SP2010 + CU to SP2010 + SP1 in production, backing up and restoring the site collection worked.  The trick was to package all assets into the site collection and to ensure all environments were on the same edition/patch level (or at least that the destination farm was patched to a higher level than the source farm).

SP2013:  You can use PS backup and restore to move site collections, but it is further restricted.  The source and destination environments need to be the same version.  My issue is that I can't move a troublesome production environment back to UAT, as my UAT farm has been patched and is a later/newer version of SP2013 on-prem.

I learnt this when restoring a site collection from 15.0.4481.1005 (SP2013 + March CU) on the source farm and trying to restore to 15.0.4569.1000 (SP2013 + SP1) on my destination farm.

Restore-SPSite : 0x80070003
At C:\Users\SP_install\AppData\Local\Temp\5ae5fd1c-86ac-4032-8975-c739f39b6f36.ps1:3 char:1
+ Restore-SPSite -Identity "http://uat.futurerailway.org" -path "C:\Software\Deplo ...
    + CategoryInfo          : InvalidData: (Microsoft.Share...dletRestoreSite:SPCmdletRestoreSite) [Restore-SPSite], DirectoryNotFoundException
    + FullyQualifiedErrorId : Microsoft.SharePoint.PowerShell.SPCmdletRestoreSite

Conclusion:  To move site collections between farms or to different content databases, the SP farms need to be using the exact same version of SP.
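
A hedged sketch for checking the build version on both farms before attempting a Restore-SPSite; run it in the SharePoint Management Shell on the source and destination farms and compare the output.

# Sketch: confirm both farms report the same build before moving a site collection.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

# Farm build, e.g. 15.0.4569.1000 for SP2013 + SP1.
(Get-SPFarm).BuildVersion

# Content databases that still need upgrading are another warning sign.
Get-SPContentDatabase | Select-Object Name, NeedsUpgrade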

Thursday 2 January 2014

Understanding SQL backups on SharePoint databases using the Full recovery mode

Overview:  This post looks at reducing the footprint of the ldf file.  SharePoint-related databases using the Full recovery model keep all transactions in the log until a transaction log backup truncates it.  The post explains how SQL is affected by backups such as SQL backups, SP backups and 3rd party backup tools (both SP backup and SQL backup tools).

This post does not discuss why all your databases are in full recovery mode, nor does it look at competing backup products.  It also contains steps to truncate and then shrink the size of the transaction log.

Note: Shrinking an ldf file only for it to regrow each week/cycle is bad practice.  The only time to shrink is when the log has unused transactions that are already covered by backups.

Background
  • Using the full recovery model allows you to restore to a specific point in time.
  • A Full backup is referred to as your "base of the differential".
  • A "copy-only" backup cannot be used as a "base of the differential", this becomes important when there are multiple providers backing up SQL databases.
  • After a full backup is taken, differential backups can be taken.  Differentials are all changes to your database since the last "base of the differential".  They are cumulative, so they grow bigger with each subsequent differential backup until a new "base of the differential" is taken.
  • To restore, you need the "base of the differential" (last full backup) and the latest "differential" backup.
  • You can also back up the transaction logs (in effect the ldf).  These need to be restored in sequential order, so you need all log file backups.
  • If you still have your database you can take a "tail-log" backup; this will allow you to restore to any point in time.
  • Every backup gets a "Log Sequence Number" (LSN); this allows the restore to form a chain of backups for restore purposes.  This chain can be broken by 3rd party tools or by switching the database into the simple recovery model.
  • A confusing pair of terms is "shrinking" and "truncating", which are closely related.  You may notice an ldf file has become extremely large; if you are performing full backups on a scheduled basis this may be a good size to keep the ldf at, as you don't want to grow ldf files on the fly (it is extremely resource intensive).  However, if your log file has not been purging/removing transactions within a cycle, you may have a completely oversized ldf file.  In this scenario, you want to perform a full backup and truncate the log; the committed transactions are removed in the sense that the unused records are marked as available to be used again.  You can then perform a "shrink" to reduce the size of the ldf file.  You don't want ldfs growing every cycle, so don't schedule the shrinking.
  • "Truncating" is marking committed transactions in the ldf as "free", i.e. available for writing new transactions to.
  • "Shrinking" will reduce the physical size of the ldf.  Shrinking can reclaim space from the "free" space in the ldf.
The process to shrink the transaction log files:

1.> Determine which databases are suitable candidates for shrinking the log file. 
DBCC SQLPERF(LOGSPACE);


2.> Perform a full backup.
3.> Next, perform a transaction log backup to truncate the log (a hedged PowerShell sketch of steps 2 and 3 follows step 5).
4.> Run DBCC SHRINKFILE as shown below (remember to leave room for growth so the ldf file is not constantly regrowing; keep extra room in the ldf - this step should only be used to reduce a log file that has grown far too large).  The example below will leave my transaction log at 100 MB.
USE AutoSP_Config;
GO
DBCC SHRINKFILE (AutoSP_Config_log, 100);
GO

5.> Verify the ldf file has reduced in size.
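
As promised in steps 2 and 3, here is a hedged sketch using the SqlServer PowerShell module's Backup-SqlDatabase; the instance, database and path names are placeholders, and many shops will do the same in T-SQL or a maintenance plan instead.

# Sketch: full backup followed by a transaction log backup (which truncates the log).
Import-Module SqlServer

$instance = "SQLSERVER01"
$db       = "AutoSP_Config"

# Step 2: full backup (the new "base of the differential").
Backup-SqlDatabase -ServerInstance $instance -Database $db `
    -BackupFile "E:\Backups\$db-full.bak" -BackupAction Database

# Step 3: transaction log backup - this truncates the committed portion of the ldf.
Backup-SqlDatabase -ServerInstance $instance -Database $db `
    -BackupFile "E:\Backups\$db-log.trn" -BackupAction Log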

Update 2014-02-05: I have AlwaysOn on my separate SharePoint Reporting Services database.  The SP_RS_ServiceTempDB should be excluded from the availability groups and kept in the simple recovery model.  The logs grow extremely quickly and it does not need to be in the Full recovery model.

Update 2014-04-25: Shrinking AlwaysOn databases does not require you to remove the database from the AOAG; you can merely perform a full backup on the primary and then shrink the database.  Note: watch the backup chains, as you can break them, especially if you have a 3rd party backup tool on SharePoint.

Update 2019-04-18:
Change a database from the Full recovery model to Simple using T-SQL and shrink the ldf file:
USE [LiveDB]
GO
ALTER DATABASE [LiveDB] SET RECOVERY SIMPLE WITH NO_WAIT
GO
DBCC SHRINKFILE ('LiveDB_log', 10)  -- logical name of the log file (typically <db>_log; check sys.database_files)
GO

The normal process is to change the db as follows: 1) set Simple recovery, 2) shrink the ldf (remember to size it so it can handle a full backup cycle), 3) set Full recovery, 4) perform a full backup (to start the SQL backup chain again).

Wednesday 28 March 2012

Deploying Code on Shared SharePoint Infrastructure

Problem: How do you deploy code onto your SharePoint production farm in an enterprise?

Initial Hypothesis: There are various options for deploying solutions on SP2010.  The stricter you are, the better the farm will cope with multiple additional coded solutions.  Lay out clear guidelines on:
GAC vs Bin vs Sandboxes
Scoping - minimal scoping
Upgrades - how do you upgrade: use wsps, version-number dlls, upgrade wsps.
Backing up and restoring site templates and moving data.  The farm you are moving a backed-up site collection to needs to be the same version or newer than the source SP version.
Customisation - do you try OOTB first; are your designs vetted by an architect; are SPD, InfoPath and 3rd party web parts/templates allowed?

Resolution:
Use test, QA and production environments and update each environment using the same steps/documentation.  Consider AvePoint Migration Manager (need to review this) for deploying InfoPath, assets including wsp's, and code.
Ensure all architects and developers know the developer standards for the SP2010 farm.  These should include how to deploy code; as a general rule, make everything deployable via PowerShell scripts - it's safer and can be retracted.  Developer standards need to mention: which SP features (MMS, UPS, Excel Services, etc.) can be used; which tools such as InfoPath and SPD are allowed; and how these changes are synchronised between your environments/farms.  Should customisation be packaged in wsp's?  It takes time but could mean a more stable farm.  The standards should also cover when to code, ensuring code such as elevated privileges is used correctly; the list goes on and is a mixture of SP best practices and implementing them pragmatically for your business.

PS to deploy wsp's
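
Here is a hedged sketch of what such a deployment script might look like; the solution path, name and web application URL are placeholders, and retraction is the mirror image with Uninstall-SPSolution / Remove-SPSolution.

# Sketch: add and deploy a farm solution, or upgrade it if it is already deployed.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

$wspPath = "C:\Deploy\MyCompany.Intranet.wsp"
$wspName = "MyCompany.Intranet.wsp"

if (Get-SPSolution | Where-Object { $_.Name -eq $wspName }) {
    # Already in the solution store - push the new build out.
    Update-SPSolution -Identity $wspName -LiteralPath $wspPath -GACDeployment
}
else {
    Add-SPSolution -LiteralPath $wspPath
    Install-SPSolution -Identity $wspName -WebApplication "http://intranet" -GACDeployment
}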

Tools to look at are:
  1. ROSS from RepliWeb (Attunity)
  2. AvePoint (Deployment Manager) has a module for deploying between environments.

Tuesday 10 January 2012

Migrate MOSS site collection to SP2010

Problem: A common upgrade scenario I get in my role as a solutions architect is to move a site collection from a MOSS site to SP2010.
Initial Hypothesis: Do a side-by-side upgrade.  Leave the existing site collection as-is (or lock it) until the upgrade has the site collection on the new infrastructure.  Don't go straight to production; I recommend doing the full upgrade on a standalone VM dev machine before even getting to test or production.  This ensures the process works, is tested and is repeatable.

Resolution: My suggested approach is to back up the content database containing the site collection, run a "preupgradecheck" and migrate it onto SP2010 on a dev machine (a hedged sketch of the database-attach steps follows).  Next, perform a backup of the site collection using CA or PS as shown here.  Create the Installation Plan document that deploys the wsp's, and restore the site collection.  You can now move the site collection to an existing content database.
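
A hedged sketch of the database-attach part of this approach on the SP2010 dev farm; preupgradecheck is run on the MOSS source farm first, and the database, server and web application names below are placeholders.

# On the MOSS 2007 source farm (SP2 or later): report upgrade blockers.
# stsadm -o preupgradecheck

# On the SP2010 dev farm: test then attach the restored content database.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

Test-SPContentDatabase -Name "WSS_Content_Portal" -DatabaseServer "SQL01" `
    -WebApplication "http://sp2010dev" | Format-List

Mount-SPContentDatabase -Name "WSS_Content_Portal" -DatabaseServer "SQL01" `
    -WebApplication "http://sp2010dev"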

More Info:
http://technet.microsoft.com/en-us/library/ee748617.aspx
http://blog.sharepointsite.co.uk/search?updated-max=2012-01-02T05:21:00Z&max-results=7

Monday 2 January 2012

Restoring dev machine nightmare

Problem: I have been battling for days with my User Profile service; somewhere along the line it got corrupted by a CU.

Hypothesis: I tried permissions, restarting the service and re-provisioning the service; nothing seemed to fix the issue.  Ultimately, I rolled back my VM to a clean install and used Spencer Harbar's guide after installing the August (re-issued) CU 2011.  This fixed my User Profile service.  Just extremely glad this is a dev machine.  TechNet also has a good article on setting up the UPS.
Lastly, I needed to redeploy all my custom code and restore the site collections to get my data back.  The site collections can be transferred using backups through: 1) CA or 2) PowerShell
Restore the site collection using PowerShell PS> Restore-SPSite -Identity http://test.demo.dev/sites/kbtest -Path "C:\Users\Administrator\Desktop\KB\kbsitecollection.bak" -force
More Info:

PS to backup a Site Collection:
Backup-SPSite -Identity <SiteUrl> -Path <BackupFile> [-Force] [-NoSiteLock] [-UseSqlSnapshot] [-Verbose]
PS> Backup-SPSite -Identity http://demo.dev -Path c:\backup\SCbackup.bak -Force -ea Stop

Note: You can't back up and restore a site collection to the same content database.  If you are using the same web application to restore the site collection, add another content database.

Note: You can only have 1 copy of a site collection restored per content database.  I restored a site collection, deleted it as I wanted it with a different name, and could not restore it again.  When I perform the restore using PS I get the following error: "Restore-SPSite : The operation that you are attempting to perform cannot be completed successfully. No content databases in the web application were available to store your site collection. The existing content databases may have reached the maximum number of site collections, or be set to read-only, or be offline, or may already contain a copy of this site collection. Create another content database for the Web application and then try the operation again."
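
A hedged sketch of the fix the error message suggests: add a fresh content database to the web application and restore into it (the database and server names are placeholders).

# Sketch: create an empty content database, then restore the site collection into it.
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue

New-SPContentDatabase -Name "WSS_Content_KB2" -DatabaseServer "SQL01" `
    -WebApplication "http://test.demo.dev"

Restore-SPSite -Identity "http://test.demo.dev/sites/kbtest" `
    -Path "C:\Users\Administrator\Desktop\KB\kbsitecollection.bak" `
    -DatabaseName "WSS_Content_KB2" -Force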

Repair the orphaned items for the offending content database:
PS> $cdb = Get-SPContentDatabase "ContentDB_Portal"
PS> $cdb.Repair($true)
PS> $cdb.Update()


More Info:
Take control of the restored content database
http://technet.microsoft.com/en-us/library/cc288148(v=office.12)