
Tuesday 20 July 2021

Elastic Database Client Library for client database segregation on Azure PaaS for SaaS

Overview:  Provide a logically separated database instance for each client of my SaaS solution.  Using the Elastic Database client library from Microsoft on Azure PaaS services provides logical security separation of data, per-customer performance, and easy scalability.  Use Azure SQL Elastic Pools (HA redundant secondary databases, built-in DR).  Also add temporal tables for a full history of all transactions.

PoC:

  1. Provision 3 databases - a Shard Map Manager (catalogue) database and 2 client databases (tenants/shards).
  2. Add shard-related metadata to the catalogue database for each of these databases.
  3. Create the three service principals below in Azure AD: 
    • Management Service Principal: for creating the shard metadata structure.  A contained database user in the Shard Map Manager db and each tenant db.
    • Access Service Principal: to load shard mappings on the application side.  A contained database user in the Shard Map Manager db.
    • Connection Service Principal: to connect to tenant databases.  A contained database user in each tenant db.
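The routing idea behind these steps can be sketched in plain JavaScript.  This is purely illustrative: in the real PoC the Elastic Database client library resolves shards against the Shard Map Manager (catalogue) database, and all names below are hypothetical.

```javascript
// Hypothetical in-memory stand-in for the Shard Map Manager (catalogue) db.
// In the real solution the Elastic Database client library loads these
// mappings using the access service principal.
const shardMap = new Map([
  ["client-a", { server: "tenants.database.windows.net", database: "tenant-a-db" }],
  ["client-b", { server: "tenants.database.windows.net", database: "tenant-b-db" }],
]);

// Resolve a tenant key to its shard; the actual connection would then be
// opened with the connection service principal (omitted here).
function resolveShard(tenantKey) {
  const shard = shardMap.get(tenantKey);
  if (!shard) throw new Error(`No shard mapping for tenant '${tenantKey}'`);
  return shard;
}

console.log(resolveShard("client-a").database); // "tenant-a-db"
```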


Management Service Principal: for creating the shard metadata structure

CREATE USER [shard-map-admin-sp] FROM EXTERNAL PROVIDER

EXEC sp_addrolemember N'db_ddladmin', N'shard-map-admin-sp'

EXEC sp_addrolemember N'db_datareader', N'shard-map-admin-sp'

EXEC sp_addrolemember N'db_datawriter', N'shard-map-admin-sp'

GRANT EXECUTE TO [shard-map-admin-sp]

 

Access Service Principal: to load shard mapping at application side

CREATE USER [shard-map-access-sp] FROM EXTERNAL PROVIDER

EXEC sp_addrolemember N'db_datareader', N'shard-map-access-sp'

GRANT EXECUTE TO [shard-map-access-sp]

                                                         

Connection Service Principal: to connect client/tenant database

CREATE USER [tenant-connection-sp] FROM EXTERNAL PROVIDER

EXEC sp_addrolemember N'db_datareader', N'tenant-connection-sp'

EXEC sp_addrolemember N'db_datawriter', N'tenant-connection-sp'

EXEC sp_addrolemember N'db_ddladmin', N'tenant-connection-sp'

GRANT EXECUTE TO [tenant-connection-sp]


References:

https://docs.microsoft.com/en-us/azure/azure-sql/database/elastic-database-client-library


Sunday 29 November 2020

Azure SQL Basic Options Summary

Overview:  Azure SQL is incredible.  There are a lot of options when choosing how to host a database, and performance is good.  It "handles patching, backups, replication, failure detection, underlying potential hardware, software or network failures, deploying bug fixes, failovers, database upgrades, and other maintenance tasks", from Microsoft Docs and Azure SQL.

Azure SQL is the PaaS database service that performs the same function SQL Server did for us for many years as the workhorse of many organisations.  Microsoft initially only offered creating VMs and installing SQL Server on them.  Azure SQL is Microsoft's PaaS SQL database-as-a-service offering in the cloud: a fully managed platform for SQL databases where Microsoft handles patching, backups and high availability.  All the features available in the on-prem edition are also built into Azure SQL, with minor exceptions.  I also like that the minimum SLA provided by Azure SQL is 99.99%.

Three SQL Azure PaaS Basic Options:

  1. Single Database - A single isolated database with its own guaranteed CPU, memory and storage.
  2. Elastic Pool - A collection of single isolated databases that share DTUs (CPU, memory & I/O) or virtual cores.
  3. Managed Instance - You manage a set of databases with guaranteed resources.  Similar to IaaS with SQL installed, but Microsoft manages more of the parts for me.  Can only be purchased using the virtual core model (no DTU option).

Thoughts: Managed Instances are recommended up to 100 TB but can go higher.  Individual databases under elastic pools or single databases are limited to a respectable 4 TB.

Two Purchasing Options:

  1. DTU - A single blended metric that Microsoft uses to represent CPU, memory and I/O.  
  2. Virtual Cores - Allows you to choose your hardware/infrastructure.  One can optimise for a higher memory-to-CPU ratio than the generalist DTU option allows.

Thoughts:  I prefer the DTU approach for SaaS and greenfield projects.  I generally only consider virtual cores if I have migrated on-prem SQL onto a Managed Instance, or for big workloads, where virtual cores can work out cheaper if the load is consistent.  There are exceptions, but that is my general rule for choosing the best purchasing option.
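My rule of thumb above can be sketched as a small decision function.  This is just my personal default codified for clarity, not official Microsoft guidance, and the function name and flags are made up for illustration.

```javascript
// Sketch of my personal rule of thumb for picking a purchasing model.
function choosePurchasingModel({ isManagedInstance, migratedFromOnPrem, largeConsistentLoad } = {}) {
  // Managed Instance only supports the virtual core model.
  if (isManagedInstance) return "vCore";
  // Migrations and big steady workloads tend to suit vCore pricing.
  if (migratedFromOnPrem || largeConsistentLoad) return "vCore";
  // Default for SaaS and greenfield projects.
  return "DTU";
}

console.log(choosePurchasingModel({})); // "DTU"
```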

Three Tiers:

  1. General Purpose/Standard (there is also a lower Basic level)
  2. Business Critical/Premium
  3. Hyperscale

Backups

Point-in-time backups are automatically stored for 7 to 35 days (default is 7 days) and protected using TDE; full, differential and transaction log backups are used for point-in-time recovery.  The backups are stored in RA-GRS blob storage (meaning in the primary region, with read-only copies stored in a secondary Azure region): 3 copies of the data in the active Azure region and 3 read-only copies.

Long Term Retention (LTR) backups can be kept for up to 10 years; these are full backups only.  The smallest retention granularity is one full backup retained per week.  LTR is in preview for Managed Instances.

Azure Defender for SQL 

Monitors SQL database servers, providing vulnerability assessments (best-practice recommendations) and Advanced Threat Protection, which monitors traffic for abnormal behaviour.

Checklist:

  1. Only valid IPs can directly access the database; deny public access,
  2. Use AAD security credentials and service principals,
  3. Advanced Threat Protection has real-time monitoring of logs and configuration (it also scans for vulnerabilities),
  4. The default is encryption in transit (TLS 1.2) and encryption at rest (TDE) - don't change this,
  5. Use dynamic data masking inside the db instance for sensitive data, e.g. credit cards,
  6. Turn on SQL auditing,

Note: Elastic Database Jobs are the PaaS equivalent of SQL Agent Jobs.

Azure also offers MySQL, PostgreSQL and MariaDB as hosted PaaS offerings. 

Note: The Azure SQL PaaS service does not support the FILESTREAM data type; use varbinary or references to blobs instead. 

Sunday 23 August 2020

AWS vs Azure vs GCP Comparison

Overview:  I predominantly use Azure & Microsoft for all my cloud services.  


I have installed multiple SharePoint farms and setups on AWS EC2 instances, and I'm currently preparing for the AWS Cloud Practitioner exam.  I have used Google for authentication and SaaS, but not as an IaaS offering.  I'm also a huge fan of Heroku, which is great for PaaS; I used it to host a game I built for Facebook games.  I also looked at IBM's cloud offering a few years ago; for me it is too niche and not as feature-rich.  So basically I understand Azure's offering well, and I found this comparison pretty useful.

My Thoughts:  The contenders: I really like Heroku for its simplicity.  For a small indie developer or company, Heroku has good free tiers and cheap, simple billing options.  GCP I can't really comment on from a position of knowledge, but from what I've used, I like it; GCP is the third-biggest cloud provider.  As a large organisation, I'd only consider the big three (Microsoft Azure, AWS, and GCP) as a cloud partner.  Multi-cloud is a demand from some organisations, but it's truly extra expensive.  Azure uses ARM templates and has many options for provisioning its IaaS and PaaS offerings.  If you are thinking multi-cloud, consider Terraform by HashiCorp for IaC.  There is also the concept of Click-Ops (sic), which is clicking through the UI of the cloud management portal to reach the desired architecture; this is fine for simple, small architectures, but you can't do it at any scale or with agility, and it's super error-prone.  Click-Ops is more a joke term for the laziest way to build infrastructure, dressed up to sound modern.  IBM's offering: if you are a partner you could go with this option, but it is aimed more at large business partners.  IBM's cloud is IaaS-focused with some PaaS offerings, but once again I'm not an expert.

AWS has always been really easy to use.  It is big and complex like Azure, with many offerings.  Basically, I'd choose AWS if the organisation was already using it and the people in the org have experience with AWS.  AWS was originally aimed at the B2C/startup market but was first to market at scale.

Azure: in my world, Azure and O365 feel like the dominant player, but looking at the relative size of the cloud infrastructure market gives useful perspective.  Azure's SaaS offering O365/M365 is also huge and is hosted on Azure.  Azure security is well thought out, and Microsoft's thinking on BYOK and geo-location appears to be important.  Microsoft offers ARM templates and DSC for configuring environments, and is also adding Bicep, an abstraction layer that compiles into ARM templates for deployment to Azure.

There is a good resource, CloudWars.co, that looks at the various cloud providers.  My current takeaway is that Amazon is the biggest player in the IaaS field.  Azure has IaaS, a large PaaS offering and a massive SaaS offering (including Dynamics and O365) for which Amazon has no equivalent.  I focus on PaaS solutions for my customers, so as to remove the infrastructure and process overheads of IaaS.

Off the top of my head, reasons for moving to the cloud and objections I hear, regardless of platform:

Why Cloud:

  1. Save Money
  2. More Secure
  3. Fast Delivery/More Agile/Easy to scale/Increase business resilience
  4. Eco-friendly

Challenges:

  1. Lack of budget
  2. Spiralling costs
  3. CAPEX is the common business norm, and some businesses find the switch to OPEX difficult
  4. Resources/skills
  5. Belief that security is an issue/don't trust the cloud
  6. Migrating legacy apps (for me: don't move to the cloud unless you get a significant advantage)


Tuesday 3 July 2018

Visual Studio Code - Azure functions using local Node and JavaScript Problem Solving

Problem:  When I run my function locally, I get warnings.  The local server still works, but the Functions Worker Runtime warning needs fixing.


When debugging locally, the Visual Studio Code terminal outputs the following warnings:

"your worker runtime is not set. As of 2.0.1-beta.26 a worker runtime setting is required.
Please run `func settings add FUNCTIONS_WORKER_RUNTIME <option>` or add FUNCTIONS_WORKER_RUNTIME to your local.settings.json
Available options: dotnet, node, java"

Later it shows that Java is being picked as the default language:
"Could not configure language worker java."

Resolution:
Open the local.settings.json file and add the "FUNCTIONS_WORKER_RUNTIME" setting:  "FUNCTIONS_WORKER_RUNTIME": "node",
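For context, a minimal local.settings.json with the runtime set looks like this (the storage value shown is a common local-development default and may differ in your project):

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "UseDevelopmentStorage=true",
    "FUNCTIONS_WORKER_RUNTIME": "node"
  }
}
```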



Problem:  When debugging my Azure Function using Node, I get the following error: "Cannot connect to the runtime process, timeout after 10000 ms - (reason: Cannot connect to the target:)." and the debugger stops.  The local server still serves the application, but I cannot debug.


Resolution: After reading many blogs, I was not having any joy.  Finally, I remembered that I had changed the port number the previous day.  In desperation, I changed the port number in the launch.json file back to its original value, and the debugger started working.
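The port the debugger attaches to lives in .vscode/launch.json.  A typical Node attach configuration looks like this (9229 is the standard Node inspector default; the name is just a label, and your port must match whatever the Functions host exposes):

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": "Attach to Node Functions",
      "type": "node",
      "request": "attach",
      "port": 9229
    }
  ]
}
```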


Wednesday 6 June 2018

SharePoint Online Replacement Patterns in Diagrams

Overview: This post highlights my default approach for building common SharePoint solutions using SharePoint Online, Flow and Azure Functions.


Matt Wade has a great resource on the components making up O365.
https://app.jumpto365.com/