Friday 29 November 2019

Redis Cache

Caching in Microsoft technologies has many forms and options.  You can have a server-side cache, a client-side cache, or downstream caching such as proxy servers.

Server Cache in .NET:
  • In-memory store: this type of server cache has historically been very useful for maintaining state and tying users to a collection of objects, and it is extremely fast.  With multiple instances you need to think about how to route traffic: sticky (static) routing can ensure user Joe's session always goes to server 3, but if server 3 dies, you lose all the cached data.
  • Local storage cache works well for larger amounts of data that are too big for the in-memory cache, but being storage-based it is much slower.
  • Shared/centralized caching allows a shared cache.  A good example is using SQL Server as the cache store: the user can go to any front-end web server and the cache is pulled from SQL.  This tolerates server failure and removes the need for fixed user routing (sticky sessions), but it is slower, as the cache is pulled from a central service that in turn goes to SQL to retrieve the data.
  • Azure Redis Cache is a form of shared caching that keeps the cache in memory.  It is a service optimized for performance that allows stateless caching to work across server instances.  Calls travel over your network in Azure, so it is not as fast as a local cache, but it is extremely fast for centralized caching.  Redis is pretty advanced and has clustering and replication to ensure performance stays good.  A minimal connection sketch follows after the link below.
https://docs.microsoft.com/en-us/azure/architecture/best-practices/caching
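
For illustration, a minimal C# sketch of the cache-aside pattern against Azure Redis Cache using the StackExchange.Redis client; the host name, access key, and the LoadFeaturedProductFromDatabase helper are placeholders, not from a real project.

using System;
using StackExchange.Redis;

class RedisCacheAsideSample
{
    static void Main()
    {
        // Create one multiplexer per application - it is designed to be shared and reused.
        // The host name and access key below are placeholders for your Azure cache.
        var muxer = ConnectionMultiplexer.Connect(
            "mycache.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False");
        IDatabase cache = muxer.GetDatabase();

        // Cache-aside: try the central cache first; on a miss, load from the
        // data source and store the result with an expiry.
        string value = cache.StringGet("featuredProduct");
        if (value == null)
        {
            value = LoadFeaturedProductFromDatabase();  // placeholder for the real data call
            cache.StringSet("featuredProduct", value, TimeSpan.FromMinutes(10));
        }
        Console.WriteLine(value);
    }

    static string LoadFeaturedProductFromDatabase() => "Widget of the hour";
}

Because the cache lives in the Redis service rather than on any single web server, any front-end instance can serve the user without sticky sessions.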

Client Side Cache:
  • The browser can hold information in session, but it is less secure than a server-side cache.
  • CDNs are a way of retrieving data from an optimized distributed store, but they are only useful for static files and data.
  • Adding headers to HTTP responses allows for downstream caching.  For example, say I offer a REST API (e.g. C# Web API) that returns a featured product that changes hourly, and I set expiry caching to 10 minutes.  The product is then fetched at most every 10 minutes per user and added to the user's local cache.  So if an average user is on for 20 minutes, only 2 of their 10 requests go to the REST API; the other 8 calls are served locally.  Coupled with server caching, the requests that do reach the server are fast, with very few calls to the back-end database, and the relatively static data puts far less demand on the web service (see the sketch after this list).
  • Validation caching is client caching that stores data locally but sends a request to the server asking whether the data has changed.  If it has, the new data is sent back; if it has not, a 304 response is returned and the browser uses the previously stored local cached data.
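
To make the expiry and validation ideas concrete, here is a rough C# Web API sketch (my own naming, deliberately simplified): it answers a matching If-None-Match with a 304 and otherwise returns the product with a 10-minute Cache-Control header.  The GetHashCode-based ETag is naive and for illustration only.

using System;
using System.Linq;
using System.Net;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Web.Http;

public class FeaturedProductController : ApiController
{
    public HttpResponseMessage Get()
    {
        string product = LoadFeaturedProduct();             // placeholder for the real data call
        string etag = "\"" + product.GetHashCode() + "\"";  // naive validator, for illustration

        // Validation caching: if the client's stored ETag still matches, reply 304 with no body.
        if (Request.Headers.IfNoneMatch.Any(t => t.Tag == etag))
        {
            return Request.CreateResponse(HttpStatusCode.NotModified);
        }

        // Expiry caching: let the client (and any downstream proxy) reuse this response for 10 minutes.
        var response = Request.CreateResponse(HttpStatusCode.OK, product);
        response.Headers.CacheControl = new CacheControlHeaderValue
        {
            Public = true,
            MaxAge = TimeSpan.FromMinutes(10)
        };
        response.Headers.ETag = new EntityTagHeaderValue(etag);
        return response;
    }

    private static string LoadFeaturedProduct() => "Widget of the hour";
}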
Client and Server (Redis) side Caching on Azure
Note:  The diagram only outlines client caching referring to the browser or end device.  Using HTTP headers, you can also set downstream proxy caching.  This is also generally referred to as client caching.

Firefox has a nice shortcut to check your client cache is working.  Open Firefox and request the URL that should cache a resource on the local client machine, then in the address bar type "about:cache" and check what is being served up locally.


Basic Caching Options:
  1. Client Cache (local) - saves on network calls and server-side processing.  Can hold client-specific static data.
  2. Client Cache (proxy) - saves on server-side processing.  Can only hold generic static data.
  3. Server-side Cache (Redis) - saves on computing resources by not having to connect to data sources on every request.  Useful for shared static data.



Friday 22 November 2019

Azure IaaS Backup Service Notes

Azure has great cost calculation tooling.  DR can be pretty expensive if it is running but not being used.  Having the ability to either turn on or deploy a DR environment on demand can make massive cost savings.

I often see organisations overspending Azure dollars; basically, most cost reduction falls into one of these three groups:
  1. Eliminate waste - storage & services no longer used
  2. Improve utilisation - oversized resources
  3. Improve billing options - long-term agreements, Bring Your Own Licence (BYOL)

Apptio Cloudability is a useful tool for AWS and Azure cost savings.  Azure has good help and tooling for cost savings.

Azure IaaS Backup:
  • Recovery Services Vaults
  • Off site protection (Azure data center)
  • Secure
  • Encrypted (256-bit encryption at rest and in transit)
  • Azure VMs, VMs with SQL, and on-premises VMs or servers
  • Server OSs supported: Windows Server 2019, 2016, 2012, 2008 (x64 only)
  • SQL Server all the way back to SQL 2008 can be backed up
  • Azure Pricing Calculator can help estimate backup costs
  1. Azure Backup Agent (MARS Agent), used to back up files and folders.
  2. Azure Backup Server (a trimmed-down, lightweight version of System Center Data Protection Manager (DPM)), used for VMs, SQL, SharePoint, Exchange.
  3. Azure VM Backup, with management done in the Azure Portal, to back up Azure VMs.
  4. SQL Server in Azure VM backup, used to back up SQL databases on Azure IaaS VMs.

Backing up Azure VMs must be done to a vault in the same geo-location; backups can't cross geo-locations.  Recovery has to be to a different location (verify this is correct?)
Note: "Backup Configuration" setting of the Vault properties can be set to "Geo-redundant"

Azure Recovery Vault Storage choice:
LRS - Locally Redundant Storage - 3 synchronous copies in the local data centre
GRS - Geo-Redundant Storage - 3 local copies plus 3 copies replicated asynchronously to the paired region in the same geography - so you can keep data in Europe for compliance; all 6 copies stay in Europe.
Update Feb 2020: I think there is also a GZRS option, check if this has changed?

Naming is absolutely key, as is having a logical hierarchy within Resource Groups, so it is easy to find resources.  I focus on naming my resources consistently; however, I've always felt "Tags" have little or no purpose in smaller environments.  In larger environments tagging can be useful for cost management, recording maintenance, resource owners, and creation dates.  Lately, I've been finding it useful to mark my resources with an Environment tag to cover my Azure DTAP scenarios, e.g. Production, Testing, Development.

Tuesday 12 November 2019

Microsoft Information Protection


Check out my earlier post on AIP (Feb 2019)

End-to-end life-cycle for encrypting files using Azure Information Protection (AIP)


Use "Unified Labeling" to create labels



Note: Encryption stops SharePoint from being able to look into the content of the file.  The labels and file name are still searchable, but not the content of the file.  eDiscovery, search, and co-authoring don't work on AIP-encrypted documents.

Cloud App Security (MCAS) Screen Shot


Sunday 10 November 2019

OpenAPI Tooling working with WebAPI and APIM Notes

Editor.swagger.io is a great tool for building OAS files.  The Swagger editor is easy to use and has a live preview of changes.

VS Code is a great IDE for working with OpenAPI Specification 2.0 and 3.0 files (also known as the Swagger specification).  These 3 extensions are a good idea when working with an OpenAPI specification file.


Stoplight also has an editor, which is nice.  It takes a little getting used to, but it makes complex design-first API work easier.
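
For reference, a minimal hand-written OAS 3.0 file of the kind these editors work with; the API, path, and schema are made up for illustration.

openapi: "3.0.0"
info:
  title: Featured Product API
  version: "1.0"
paths:
  /featured-product:
    get:
      summary: Returns the currently featured product
      responses:
        "200":
          description: The featured product
          content:
            application/json:
              schema:
                type: object
                properties:
                  name:
                    type: string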

Sunday 27 October 2019

Google Tag Manager

 "Tag Manager gives you the ability to add and update your own tags for conversion tracking, site analytics, remarketing, and more. There are nearly endless ways to track activity across your sites and apps, and the intuitive design lets you change tags whenever you want." https://marketingplatform.google.com/about/tag-manager/benefits/

Tag manager is useful for adding 3rd party tags/scripts.


Saturday 19 October 2019

Evidence Based Management for Scrum

Evidence-Based Management (EBM) is a management framework that uses evidence and metrics.  Execs and middle management often don't get real insight from normal forecasts; the EBM framework helps management move the company, using evidence to drive the business forward and to inspect and adapt.  I like to keep everything simple and move towards a more complex, mature set of measurements, as the easiest metrics to implement provide early value to the project.

"Evidence-Based Management is a framework organizations can use to help them measure, manage, and increase the value they derive from their product delivery."  Scrum.org

Example types of metrics you may want to measure:
  1. Velocity
  2. Sprint Burn downs
  3. Versioning - versions of product or bugs requiring multiple fixes
  4. Code Coverage & Code Analysis
  5. Missed bugs/escaped bugs - bugs missed in a release
  6. Issues/Failed Deployments
  7. Team happiness - capture metrics in Sprint Retrospective from the team
  8. Financial measure - cost to delivery
  9. Average User Story completion time per team (identify outliers and the average for each team; measure from started status to completed status).
  10. Average story points per team, factoring in the number of team members over multiple sprints (don't compare teams, as their unit size is not consistent across teams; if you try to standardize, teams will inflate their points to fake outperforming other teams).
These help management nudge the teams and company in the right direction.  To get these metrics, you need to explicitly determine what the goals are, so you can figure out the KPIs and make sure you are measuring the right things.  Ensure the management goals are SMART (Specific: well defined, clear, and unambiguous), or use the OKR framework to define goals and work out the metrics to collect.  Ensure the metrics are important to the goals of the company or team.  Consider using OKRs:
"Objectives and key results (OKR) is a goal-setting framework that helps organizations define goals — or objectives — and then track the outcome."  cio.com

Common KPIs:
  1. Escaped Defects
  2. Defects in the last 14 days
  3. In Progress Features and User Stories per team
  4. Team velocity
  5. Stories completed vs stories committed
  6. Tech debt (known issues, items skipped)
  7. Team burndown charts
A couple of columns and dashboard widgets for summarizing progress in DevOps:
  1. Progress by User Story - total minus remaining, rolling up to features.
  2. Progress by Work Items in a feature - gets the user stories, tasks, and spikes that make up the feature and shows the % of artefacts completed within the feature.
  3. Epic Progress - each epic shows the number of features and the number of features completed, e.g. 3 of 11 features completed.
  4. Number of "In Progress" features or User Stories per scrum team - very useful to check a team is using and updating artefacts proactively.
  5. Priority committed vs. done, using priority or WSJF.
Three goals of KPIs in Scrum:
  1. Measure deliverables
  2. Measure the team's ability to deliver business value
  3. Measure the scrum team itself

Wednesday 16 October 2019

Troubleshooting Box.com programmatic access

This post is loosely connected to:
Using Box.com Programatically (JWT)
Using Box.com Programatically (Part2)

Troubleshooting:
Box has a brilliant Connectivity Test Tool
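
For context, here is a minimal sketch of the JWT setup these errors relate to, using the Box.V2 .NET SDK; it assumes config.json (downloaded from the Box developer console) sits next to the executable, which is exactly what went wrong in the second error below.

using System;
using System.IO;
using Box.V2;
using Box.V2.Config;
using Box.V2.JWTAuth;

class BoxJwtSample
{
    static void Main()
    {
        // config.json holds the app's JWT key material from the Box developer console.
        // It must be copied to the output (bin) directory for this to work.
        var json = File.ReadAllText("config.json");
        var config = BoxConfig.CreateFromJsonString(json);

        var auth = new BoxJWTAuth(config);
        var adminToken = auth.AdminToken();          // service-account access token
        BoxClient client = auth.AdminClient(adminToken);

        // Simple connectivity check: read the root folder ("0").
        var root = client.FoldersManager.GetInformationAsync("0").Result;
        Console.WriteLine(root.Name);
    }
}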

Error: System Error: System.AggregateException
 Stack Trace:    at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
   at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
Initial Hypothesis:  As my Box programmatic access was working on one machine but not on a VM at my client, I suspected a difference in the environment or network connectivity.
Attempted Resolution:  I added the System.Threading DLLs, assuming they were in the GAC on my machine but not on the server.  No effect; it did not fix the issue.
Resolution:  I checked traffic using Fiddler and could see URLs were not reachable/returning data.  Using the Box.com Connectivity Test Tool, I could see multiple outbound https/443 URLs were being blocked by the corporate firewalls.

Error:  Error Message: System.AggregateException
 Stack Trace:    at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
   at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
   at System.Threading.Tasks.Task`1.get_Result()
Initial Hypothesis:  This happened after the networking was corrected, and from a machine on the Internet, so the issue was not networking.  I was not pushing the config.json file out to my bin directory, so the file upload was using an expired JWT.
Resolution:  The message did not give me a good inkling of the issue.  My output directory (e.g. the /bin folder) was not getting updated with the config.json file.  I copied in the correct JWT/config.json file and the error was resolved (see the .csproj fragment below).
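
As an aside, a hypothetical .csproj fragment that prevents this class of problem by copying config.json to the output directory on every build:

<!-- Ensure config.json is copied to the bin folder so the SDK reads current JWT settings -->
<ItemGroup>
  <None Include="config.json">
    <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
  </None>
</ItemGroup>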

Error: Error Message: System.IO.IOException: base64 data appears to be truncated
Resolution:  Check the config.json file, my file got corrupted.

Error: Error Message : invalid_client
The client credentials are invalid.
Resolution:  The config.json was for a trial tenant that I had previously set up; get the correct config.json file.

Error: Error Message: invalid_grant
 Stack Trace: Signature verification error. The public key identified by "kid" must correspond to the private key used for signing.
Resolution: The Box admin team had revoked the JWT, a new config.json file was generated and fixed the issue.