Monday, 26 October 2020

Identity Server - OAuth and OIDC

Overview:  The current version of Identity Server is 4.  Identity Server is basically a .NET Core 3.1 application that is an Identity Provider (IdP), similar in role to PingID, SiteMinder and AAD B2C.  Identity Server allows applications (native mobile apps, websites and servers) to securely authenticate users.

OAuth2 Grant Types:

Flow, description, typical clients and grant type:
  • Authorization Code with PKCE - Default choice for authorization.  Clients: native mobile apps, Windows apps, browser apps.  Grant type: Code.
  • Client Credentials - Server-to-server (S2S) communication, also referred to as machine-to-machine (M2M).  Clients: servers, consoles, services.  Grant type: ClientCredentials.
  • Implicit - Rather use the Authorization Code flow with PKCE; native apps and SPAs have often used the Implicit flow.  Grant type: Implicit.
  • Hybrid
  • Device
  • Resource Owner Password

Scopes: The authorisation server specifies the "scope" that the user has consented to.  For an API, this lets you limit the actions the user can perform.  Always name your scopes by the API and the verb, e.g. "pauls_api.read" is better than "read".
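
A minimal sketch of registering such scopes and a PKCE client, assuming IdentityServer4 v4's ApiScope model (the client values are illustrative, not from the original post):

// Config.cs - illustrative IdentityServer4 v4 configuration
using System.Collections.Generic;
using IdentityServer4.Models;

public static class Config
{
    // Scopes named by API and verb, e.g. "pauls_api.read" rather than "read"
    public static IEnumerable<ApiScope> ApiScopes =>
        new List<ApiScope>
        {
            new ApiScope("pauls_api.read", "Read access to Paul's API"),
            new ApiScope("pauls_api.write", "Write access to Paul's API")
        };

    public static IEnumerable<Client> Clients =>
        new List<Client>
        {
            new Client
            {
                ClientId = "browser_app",
                AllowedGrantTypes = GrantTypes.Code,  // Authorization Code flow
                RequirePkce = true,                   // PKCE for public clients
                RedirectUris = { "https://localhost:5002/signin-oidc" },
                AllowedScopes = { "pauls_api.read" }
            }
        };
}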



Monday, 19 October 2020

APIM High Availability and Performance across Regions

Overview:  APIM can be set up in multiple regions, and incoming requests will be routed to the closest APIM endpoint.  If there is only 1 APIM region, it is best to ensure the API/App Service/Function is hosted in the same region.  With multiple APIM instances, you can also host the API in each region.  The routing is done either automatically using Azure Front Door or via policy on the APIM.
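
As a sketch of the policy-based option (the backend URLs and region names are illustrative), an APIM inbound policy can pick a backend using the built-in context.Deployment.Region property:

<inbound>
    <base />
    <choose>
        <!-- Route to the backend co-located with the APIM region serving the call -->
        <when condition="@("West Europe".Equals(context.Deployment.Region, StringComparison.OrdinalIgnoreCase))">
            <set-backend-service base-url="https://myapi-westeurope.azurewebsites.net" />
        </when>
        <otherwise>
            <set-backend-service base-url="https://myapi-eastus.azurewebsites.net" />
        </otherwise>
    </choose>
</inbound>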







Friday, 9 October 2020

App Insights - Website and API Monitoring

Overview:  App Insights has functionality to run scheduled web requests and log the output in App Insights.  There are multiple advantages to this, including end-to-end active monitoring of websites and APIs, and keeping the application warm.

Below I show a simple request to my blog (a public website) and the results.  Azure refers to this test type as a URL Ping test, which is basically a URL HTTP GET request.


Wait a few minutes and Refresh to see the results:

A very easy way to include a constant check that your API or website is running.  There is also the option to create a "Multi-step web test" using Visual Studio.  You can record the authentication and assert on known response content to build advanced continuous monitoring.

Tip: The URL does need to be publicly available.

The content I used to test out the functionality comes from the Microsoft Docs site.
Also see the Live Metrics Stream that is part of App Insights.

More Info: 
App Insights MultiStep Tests
Replacement Option for MultiStep Test based on Azure Functions
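
A minimal sketch of the Azure Functions replacement approach linked above (the schedule, test name and URL are illustrative): a timer-triggered function runs the check and reports it via TelemetryClient.TrackAvailability, so results show under Availability like a URL ping test:

// Timer-triggered Azure Function reporting a custom availability result
using System;
using System.Diagnostics;
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.ApplicationInsights;
using Microsoft.Azure.WebJobs;

public static class AvailabilityCheck
{
    private static readonly HttpClient Http = new HttpClient();
    private static readonly TelemetryClient Telemetry = new TelemetryClient();

    [FunctionName("AvailabilityCheck")]
    public static async Task Run([TimerTrigger("0 */5 * * * *")] TimerInfo timer)
    {
        var timestamp = DateTimeOffset.UtcNow;
        var stopwatch = Stopwatch.StartNew();
        var success = false;
        try
        {
            var response = await Http.GetAsync("https://www.pbeck.co.uk/");
            success = response.IsSuccessStatusCode;
        }
        finally
        {
            stopwatch.Stop();
            // Appears in App Insights under Availability
            Telemetry.TrackAvailability("Blog availability", timestamp,
                stopwatch.Elapsed, "Azure Function", success);
        }
    }
}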

Thursday, 1 October 2020

App Insights - Basic Introduction

Overview: Azure App Insights is a great platform for collecting logs and monitoring cloud-based applications on Azure.  All Azure services can push logging information into App Insights instances.  This can be error, usage and performance logging that in turn is easy to query.  There are SDKs for developers that can be used to add custom logging to applications.

Retention:  App Insights can keep 730 days' worth of logs.  For long-term storage, "Continuous Export" can be used to push data into storage accounts as soon as it arrives in App Insights.  Retaining App Insights logs for 90 days has no additional cost, so in most situations the default retention should be set to at least 90 days.

What is logged and what can be logged:  
  • All Azure services can be configured to send service logs to a specific App Insights instance.
  • Instrumentation packages can be added to services such as IIS or background services to capture logs.  You can also pull telemetry from infrastructure into App Insights, e.g. Docker logs and system events.
  • Custom code can also call the App Insights instance to add logging and hook into exception handling.  There are .NET, Node.js, Python and other SDKs that should be used to add logging, exception capturing, and performance and usage statistics.

App Insights has a REST API to query the logs.  The "API Explorer" tool is awesome for querying App Insights online.  
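
A minimal sketch of calling that query API (the app id, API key and KQL query are placeholders; the first two come from the API Access blade):

// Query the App Insights REST API with a KQL query
using System;
using System.Net.Http;
using System.Threading.Tasks;

public static class AppInsightsQuery
{
    public static async Task Main()
    {
        const string appId = "<your-app-id>";
        const string apiKey = "<your-api-key>";
        const string query = "requests | where timestamp > ago(24h) | count";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("x-api-key", apiKey);

        var url = $"https://api.applicationinsights.io/v1/apps/{appId}/query" +
                  $"?query={Uri.EscapeDataString(query)}";
        Console.WriteLine(await client.GetStringAsync(url));  // tabular JSON result
    }
}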


The data below comes from Microsoft Docs.

"What kinds of data are collected?

The main categories are:

  • Web server telemetry - HTTP requests. Uri, time taken to process the request, response code, client IP address. Session id.
  • Web pages - Page, user and session counts. Page load times. Exceptions. Ajax calls.
  • Performance counters - Memory, CPU, IO, Network occupancy.
  • Client and server context - OS, locale, device type, browser, screen resolution.
  • Exceptions and crashes - stack dumps, build id, CPU type.
  • Dependencies - calls to external services such as REST, SQL, AJAX. URI or connection string, duration, success, command.
  • Availability tests - duration of test and steps, responses.
  • Trace logs and custom telemetry - anything you code into your logs or telemetry."

Azure Dashboards


Tuesday, 29 September 2020

Secure APIM using AAD B2C

Overview:  I have never connected AAD B2C to APIM myself; others on my project teams have done it, so I went through it and it was super easy.

Followed the instructions: 

https://docs.microsoft.com/en-us/azure/active-directory-b2c/secure-api-management?tabs=applications-legacy
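
The core of the linked walkthrough is a validate-jwt inbound policy on the APIM API/operation.  As a sketch (the tenant, user flow and app id values are placeholders):

<!-- Reject calls that do not carry a valid AAD B2C-issued JWT -->
<validate-jwt header-name="Authorization" failed-validation-httpcode="401"
              failed-validation-error-message="Unauthorized. Access token is missing or invalid.">
    <openid-config url="https://yourtenant.b2clogin.com/yourtenant.onmicrosoft.com/B2C_1_signupsignin/v2.0/.well-known/openid-configuration" />
    <audiences>
        <audience>your-backend-app-client-id</audience>
    </audiences>
</validate-jwt>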

Postman to validate:




Sunday, 13 September 2020

Building better Software Thoughts

Overview:  I see a lot of development teams, and they always seem to have areas they are good at and capabilities that need improvement.  Key is culture and building a happy team where team members trust and help one another.

Building a culture where teams enjoy code reviews is also key to successful software projects.  To improve software, reviewing various areas, not only code, is essential.  For me, clear requirements are the number 1 factor in improving team performance.

Companies are getting better at building software; I aim to work on these topics to improve the delivery of software within scrum teams:

  1. Code Reviews & Peer Reviews (daily reviews are awesome; they should be pretty short and enjoyable, not hours long or someone trying to show off)
  2. Collaboration (Standups, Slack/Teams, Code tools have collaboration built in)
  3. Documentation & Requirements Reviews
  4. Better tooling, including better CI/CD tooling and static analysis tools
  5. Unit Testing, automate coding standards, Integration testing, UI Testing, and API testing 
  6. Requirements (User Stories that are clear and have Acceptance Criteria)
  7. Cadence is improving thanks predominantly to Agile practices; I like short release cycles (2-3 weeks depending on the team and industry).  Changing requirements and indecision kill software projects.  Agile helps, but decisive, knowledgeable product owners increase the likelihood of the project succeeding.

Benefits of Code, Documentation and Requirement Reviews:

  1. Improved software quality & product delivery
  2. Share domain knowledge
  3. Training team members (useful for onboarding new team members)
  4. Reduce support and fix costs
  5. Lower cost & faster development

Options for Layering APIs on Data Sources - Microservices, kind of

Hasura takes data sources such as SQL Server, Postgres & MySQL and converts them into GraphQL APIs.  SQL Server support is in preview.  The service is available on Azure and hooks into AAD and AAD B2C.  Hasura looks extremely interesting and useful.  Potentially a great time saver.

CDS/Dataflex/Oakdale - Allows for entity creation and provides REST APIs.

SharePoint lists provide HTTP APIs for CRUD operations.

REST API's vs GraphQL

The OpenAPI specification (previously known as the Swagger specification) is my default for an API; this allows for a known RESTful API that anyone with access can use.  OpenAPI has set contracts that return defined objects, which is great: you can work with the API like a database with simple CRUD operations as defined by the specification.  The issue is that the returned objects are fixed in structure, so you may need 2 or more queries to get the data you are looking for.  Alternatively, GraphQL allows the developer to ask for the data exactly as they want it.
Open API example:
/api/users/2            // Returns the user object for user 2
/api/users/2/orders/10  // Returns the last 10 orders for the user
GraphQL example:
Post a single HTTP request.
query {
  User(id: "2") {
    name
    email
    orders(last: 10) {
      orderid
      totalamount
      datemodified
    }
  }
}

Sunday, 6 September 2020

Working with CDS data structures for non-CRM types

Overview:  I am working on a Power Platform solution and I need to use CDS.  Basically, I need to be able to see and edit values within CDS.

Option 1: Microsoft SQL Server Management Studio (SSMS) version 18.6 allows connectivity and read-only access.  Here are the instructions.
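
As a sketch, assuming the CDS TDS endpoint that SSMS connects to (the org name is a placeholder): set the server name to the org's SQL endpoint on port 5558 with AAD authentication, then query entities read-only:

-- Server name in SSMS: yourorg.crm.dynamics.com,5558 (placeholder org)
-- e.g. the 10 newest accounts
SELECT TOP 10 name, createdon
FROM account
ORDER BY createdon DESC;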

Option 2: XrmToolBox has fantastic tools for Dynamics and Power Apps.  There are a lot of individual tools from various contributors.

Here I am using "Entity Relationship Diagram Creator" to look at the relationships between the CDS entities.




Saturday, 5 September 2020

Reducing Power Apps Dynamic calls and where to store Power Apps data.

Overview:  Power Apps is driven by data, and generally that data comes from connectors.  The great news is there are a lot of different connectors, and if in trouble I always find a custom connector can be relied upon.  When working with Power Apps, it is not as simple as just having a data source and consuming it; one needs to consider all the data sources, whether live data is needed, and performance.  Basically, repeatedly pulling lots of dynamic sets of data leads to poor performance.

Identify data sources:  CDS, Azure SQL, SQL on-prem, Cosmos DB, RSS, Open APIs...

Understand the security and the amount of data being pulled.  For example, suppose we need all the airport codes in the world for a drop-down so the user can choose their closest airport, e.g. JFK is New York's John F. Kennedy airport.  There are roughly 4,000 commercial airports in the world.

Options: Call an Open API service.  Power Apps by default returns sets of 500; the Power Apps max return count is 2,000, so you still need to perform 2 calls with paging to get the full data set.  You could use type-ahead if the API supports it, but there will be a lag after each keystroke while Power Apps calls out to the service, and there will be a lot of calls.  More suitable would be to do 2 calls of 2,000 records each and bind the control to the returned data.

A further improvement would be to store the airport lookup on data load or on first request; subsequent requests would then use the table/collection.  In effect, you are locally caching all the airport codes using 2 calls per Power Apps user session.
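
A minimal Power Fx sketch of that caching approach (the AirportsAPI connector and its paging parameters are hypothetical):

// In App.OnStart, or on first request: cache ~4,000 airports with 2 calls
If(
    IsEmpty(colAirports),
    ClearCollect(
        colAirports,
        AirportsAPI.GetAirports({page: 1, pageSize: 2000}),  // first 2,000
        AirportsAPI.GetAirports({page: 2, pageSize: 2000})   // second 2,000
    )
)
// Bind the drop-down's Items property to colAirports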

For the airport example, one could also store the data using an Excel import, but beware: the data is imported into Power Apps and stored locally.  The big issue is that the data is static in Power Apps; to update it, you need to re-import the Excel table.  So it is brilliant for, say, days of the week or months of the year, as these never change.  Fairly static data like airport codes works well but requires a publisher-level overwrite to update the list.  Also, storing extreme amounts of static data bloats the app, and that data still needs to be loaded, so as a general rule I would not consider it for 100k+ items.

More Info:

Todd Baginski has a great video on using Excel import and creating language variations/multi-lingual Power Apps using Excel imports.

Alagunila Meganathan on C# Corner has a good post on Excel Imports for Power Apps.


Monday, 24 August 2020

AWS vs Azure services offering comparison for Solution Architects

Overview: Microsoft provides a useful list that aligns AWS services to Azure services.  This is pretty useful if you know one platform considerably better than the other, to quickly figure out your options on either AWS or Azure.

My Service comparison notes:

Amazon Relational Database Service (RDS) – SQL Server, Oracle, MySQL, PostgreSQL and Aurora (Amazon’s proprietary database).

Azure SQL lines up with Amazon's RDS SQL Server service, although Aurora is probably also worth the comparison as it's AWS's native DB option.

Amazon DynamoDB – Same as Cosmos DB – NoSQL database.

Amazon Redshift is the data warehouse.  Can be encrypted and isolated.  Supports petabytes of data.

Amazon ElastiCache runs Redis cache and Memcached (simple cache).

AWS Lambda – Azure Functions.  Serverless.

AWS Elastic Beanstalk – Platform for deploying and scaling web apps & services.  Same as Azure App Service.

Amazon SNS – Pub/sub model – Azure Event Grid.

Amazon SQS – Message queue.  Same as Azure Storage Queues and Azure Service Bus.

Amazon Step Functions – Workflow.  Same as Logic Apps.

AWS Snowball – Same as Azure Data Box.  Physically copy data and transport it to the data centre for upload.

Virtual Private Cloud (VPC) – Azure Virtual Network

Tip: I am glad that I did the AWS Certified Cloud Practitioner exam, as it helped my understanding of the AWS offering, which has been very useful in large integration projects.  I have worked with AWS IaaS historically (EC2, API Gateway and S3).  Like Azure, there are a lot of services and features.  Basically, there are equivalent services across Azure & AWS; occasionally it is a 2-to-1 service mapping, or something one cloud provider does not offer at all.

Sunday, 23 August 2020

AWS vs Azure vs GCP Comparison

Overview:  I predominantly use Azure & Microsoft for all my cloud services.


I have installed multiple SharePoint farms and setups on AWS EC2 instances, and I'm currently preparing for the AWS Cloud Practitioner exam.  I have used Google for authentication and SaaS, but not as an IaaS offering.  I'm also a huge fan of Heroku, which is great for PaaS; I used it to host my game built for Facebook games.  I also looked at IBM's cloud offering a few years ago; for me it is too niche and not as feature rich.  So basically, I understand Azure's offering well, and I found this comparison pretty useful.

My Thoughts:  The contenders: I really like Heroku for its simplicity.  I feel that for a small indie developer or company, Heroku has good free tiers and cheap, simple billing options.  GCP I can't comment on from a good position of knowledge, but from what I've used, I like it.  IBM's offering: if you are a partner you could go with this option, but it is aimed more at large business partners.  IBM's cloud is IaaS focused with some PaaS offerings, but once again I'm not an expert.

AWS has always been really easy to use.  It is big and complex like Azure, with many offerings.  Basically, I'd choose AWS if the organisation was already using it and the people in the org have experience with AWS.

Azure: in my world, Azure and O365 feel like the dominant player, but the diagram below provides a great insight into the relative size of the cloud infrastructure market.  Azure's SaaS offering, O365/M365, is also huge and hosted on Azure.

There is a good resource, CloudWars.co, that looks at the various cloud providers.  My current takeaway is that Amazon is the biggest player in the IaaS field.  Azure has IaaS, a large PaaS offering, and a massive SaaS offering (including Dynamics and O365) for which Amazon has no equivalent.

Saturday, 1 August 2020

Possible Technical Roadmap and thoughts on a startup

Overview:  A friend of mine's son has recently built a website, and it looks impressive.  We started discussing his project, and this turned into a detailed technical conversation, to everyone's horror, at a BBQ.  This chap is 14, and all I can say is wow.  All hosted and built without spending any money.  I interview many developers, and his knowledge is outstanding.

I've drawn his architectural explanation below, added a few items to check and what my next piece would be.

Clarification of High Level Design
Thoughts:
  1. Build the native mobile apps; user buy-in with a native app will be considerably higher.  As you have used React, use React Native (React Native is separate from React, a completely new codebase).
  2. Cordova/PhoneGap is a wrapper that would allow you to keep a single code base and merely inject the existing code into native wrappers for iPhone and Android.  As your app is HTML5 and already looks like a modern mobile app, I'd use PhoneGap.  At least try it out and see if it fits.  You'd then only have to deal with a single code base to update all the platforms.  You could always use Google's Flutter if you wanted a single code base for the website and native mobile apps.
  3. Ensure your APIs are OpenAPI/Swagger.
  4. Security - the API already has some protection.  Ensure the database is protected/secured.
Revenue model:  Ads (gambling ads pay well).  Make a premium paid-for service (keep end-user usage/sign-up free).

Thursday, 23 July 2020

Shopify - Add optional installation on the cart for shoppers

Problem:  On my Shopify shopping cart, if a buyer is checking out and they have plants/flowers in their cart, I need to offer them the ability to have someone plant for them.

This could take various forms; for example, with furniture you could offer an assembly service.

Initial Hypothesis: Shopify uses Liquid as its scripting language.  Liquid allows us to combine Liquid with HTML, CSS and JavaScript to get the desired page behavior.  Below is my user story:

As a shopper, I want to be able to add labor so that I can have my flowers/plants installed/planted, and so that I know an expert has put my flowers in correctly.

I also need to add the appropriate amount of labor: if there are fewer than 3 plants, charge for 1 hr; otherwise charge for 2 hrs.  Shopify has variants, allowing me to have a labor product for 1 or 2 hrs.

Resolution:  Amend the cart.liquid cart summary page so the shopper can easily add labor to plant.

Desired Behavior: 
Add a button to add installation

The optional installation cost for 1 hr is added to the store
Steps to Implement: 
1. Create a new product in Shopify, "Plant for me please".  Add it to a unique collection; mine was "Last Minute Checkout Items".  Add a couple of variants for time/cost as shown below:
2. Go to the product page and append .xml to the end of the URL, e.g. https://nurserynearby.co.za/collections/last-minute-checkout-items/products/would-you-like-someone-to-plant-your-plants.xml, and get the Variant Ids; we need these to add the correct amount of labor cost later.
3. Open the cart.liquid file and add the following logic:
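
The original snippet is a screenshot; below is a minimal Liquid sketch of the kind of logic involved (the product type check and the variant ids are hypothetical placeholders - use the ids from step 2):

{% assign plant_count = 0 %}
{% for item in cart.items %}
  {% if item.product.type == 'Plant' %}
    {% assign plant_count = plant_count | plus: item.quantity %}
  {% endif %}
{% endfor %}
{% if plant_count > 0 %}
  {% comment %} 1 hr labor variant if fewer than 3 plants, else the 2 hr variant {% endcomment %}
  {% if plant_count < 3 %}
    {% assign labor_variant = 11111111111 %}
  {% else %}
    {% assign labor_variant = 22222222222 %}
  {% endif %}
  <form action="/cart/add" method="post">
    <input type="hidden" name="id" value="{{ labor_variant }}">
    <input type="submit" value="Plant for me please">
  </form>
{% endif %}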




Shopify thoughts

Overview:  Recently, I had two requests for some work around shopping carts/auctions.  I dismissed the first request as it is not what I specialize in.  On the second request, I decided to take the project on at a hugely discounted rate as I felt I did not have expertise in the field.

Shopify: Shopify is an amazing product and ecosystem, and I have built a great shop on the SaaS Shopify platform in record time.  There are great plug-ins to add functionality such as gift registries and mobile app integration.  I have now been playing with the product, delivered solutions, worked with app vendors, reviewed competitors and had help from Shopify support, and I can honestly say that it is awesome.


Program-ability:

  • There are simple APIs that are well documented.
  • The internal language is Liquid, which is basically a UI scripting language.
  • Shopify SaaS has add-ins and templates, so look before developing.
Liquid:


In the code snippet above, I am building custom functionality into the cart checkout to add specific products, allowing more sales on the store depending on custom rules provided by the customer.

Mobile:
The OOTB templates, or those shop admins can purchase, follow responsive design as one would expect.  There are add-ons for iOS and Android to make native apps, very well priced with good functionality, charged on a monthly recurring basis (circa $40-100/month).  I need a custom mobile app for both platforms, so I'm choosing between building with Flutter or Blazor.


Friday, 10 July 2020

Power Apps Tracing to App Insights Not Working in edit mode

Overview: Power Apps integrates directly with App Insights for logging.  This post looks at issues around tracing to App Insights.

Setup & Verify:
An App Insights instance's Instrumentation Key is required; configure your Power App to trace to App Insights, and create a button to test.
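
A minimal sketch of the test button's OnSelect (the trace name and record are illustrative; Trace is the built-in Power Fx function):

// OnSelect of the test button: send a custom trace to App Insights
Trace(
    "TestButtonClicked",
    TraceSeverity.Information,
    {UserEmail: User().Email}
)
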
The Monitor tool is fantastic for tracing all outbound traffic; you no longer need to go to the target service and check whether Power Apps is reaching it.  For example, I used to have to look at the APIM Azure App logs for custom connectors.
I can see all my interactions in Power Apps and travelling out.  This is a massive win for Power Apps, giving tracing from the front end.

Greg Lindhorst wrote a post on using the Power Apps Monitor tool for debugging and performance improvement.

Problem: Traces normally take a few minutes (like 2 minutes) to show up in App Insights; they are not showing up after 15 minutes.  Let's see; I've dropped the Power Apps team a message, and I think it's a bug that has crept in recently.  Today is 10 July 2020.  I've also noticed that none of my session page tracing is showing up in App Insights.

Update: 11 July 2020.  The Power Apps behaviour has been on my mind.  When I publish the app and use it, App Insights logs perfectly.  When running the app in edit mode, nothing is logged from Power Apps, not even direct traces.  I did not realise this, but it kind of makes sense.

Resolution: Publish & Run the app, and the Tracing and Power App session tracing shows up in App Insights.

Only when I run the published app do my Page Views and Custom Traces get logged in App Insights.

Tuesday, 30 June 2020

Multi-Geo for MS Teams

O365 offers multi-geo tenants to meet data residency rules for 13 countries and regions (as of 30 June 2020):
  1. Australia
  2. Asia Pacific 
  3. Canada
  4. European Union
  5. France
  6. India
  7. Japan
  8. Korea
  9. United Kingdom
  10. United States
  11. United Arab Emirates
  12. South Africa
  13. Switzerland.
Teams data resides in SharePoint Online, OneDrive for Business and Exchange Online.  With Multi-Geo enabled, a company can specify where data will reside.  There are 2 parts to multi-geo:
  • User-specific data.  This data is stored in the user's satellite geo, e.g. email, OneDrive.
  • Company/project/division-specific data, e.g. file shares, document libraries.
For more info on Multi-Geo on O365

Microsoft "Multi-Geo is currently available to Enterprise Agreement customers with a minimum of 250 Microsoft 365 Services subscriptions."

MS Teams Background Info:
https://www.pbeck.co.uk/2019/12/microsoft-teams-governance.html
https://www.pbeck.co.uk/2020/05/microsoft-teams-overview.html

Wednesday, 24 June 2020

Postman API Builder Intro

Overview: Tools for building and mocking APIs.  Swagger/OpenAPI - great tooling and my original preferred choice.  APIM - great tooling, part of Azure, and easy to replace the mocks with the live implementation as you go along.  Postman is offering a great set of functionality to rival Swagger and APIM.  This post looks at Postman's new functionality around building APIs.

Postman API Builder:
Not only a test rig, it now offers the ability to build APIs and mock:
  • Mock - so you can test; supports key and OAuth authentication
  • Assert tests - you can specify asserts in Postman
  • Test suites - generate collections/Collection Runner - allows a set of related tests to run sequentially
  • Document the API
  • Monitor
  • Version control for changes, e.g. GitHub
  • API versions supported
  • Note: The free plan is limited in the number of APIs, but all the features are in the free plan.  The main notation formats are supported, including OpenAPI & GraphQL.
Summary:
I like the Swagger tooling, and having done a few projects I find APIM fantastic for building APIs quickly.  Postman historically was merely my test rig, but looking at the functionality, Postman API Builder is a great option for designing and building APIs.  Postman is also a good tool to build into CI/CD pipelines to validate APIs.

Thursday, 28 May 2020

Microsoft Teams Power Apps Integration

Overview: Teams is amazing.  I was a complete Slack fan, but I'm now 100% a Teams supporter.  It's part of O365, replaces Skype (which was great, but only a chat app like Zoom), you get your email, and you can add all your apps and websites to your team.

Adding your custom Power Apps to Teams:

Adding A Power App to MS Teams:




Notes:
  1. MS Teams uses the Chrome engine (Chromium) as its browser.
  2. A feature I don't like about Teams is that when I switch focus to, say, a chat window and come back to my Power App within MS Teams, I lose my place in the Power App and the app is loaded from scratch.
  3. I believe the problem of apps maintaining session state will be solved shortly with pop-out windows in Teams, around July/Aug 2020.

Friday, 8 May 2020

cURL for Windows 10 & Azure Cognitive Service Primer



In this example I send a jpeg to an Azure Cognitive Services endpoint using cURL on my Windows 10 Surface laptop.
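
A minimal sketch of the kind of call involved, assuming the Computer Vision analyze operation (the region, API version, key and file path are placeholders):

curl -H "Ocp-Apim-Subscription-Key: <your-key>" ^
     -H "Content-Type: application/octet-stream" ^
     --data-binary "@C:\temp\photo.jpg" ^
     "https://westeurope.api.cognitive.microsoft.com/vision/v3.0/analyze?visualFeatures=Description"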

Sunday, 3 May 2020

Common Software Architectural Patterns

3/N Tier Architecture/Layered:
1) Presentation/UI layer
2) Business Logic
3) Data Layer/Data source
Here are a couple of possible examples you could have used over the years:
ASP > C++ Com > SQL Server 2000
ASP.NET (Web Forms) > C# Web Service (XML/SOAP) > SQL Server 2008
ASP.NET C# > C# Business Object Layer > SQL Server 2008
KO > MVC > SQL 2012
Angular 3 > C# Web API (swagger contract) > SQL 2016
REACT.JS > Node.JS > Amazon Redshift
UI > Azure Functions/Serverless > SQL Azure
Flutter > C# Web API .NET Core 3 (swagger/OpenAPI) published on Azure App Service > SQL Azure/Cosmos

Thoughts:  As time has progressed, scaling each of these layers has become easier.  For instance, Azure SQL has replication, high availability and scalability automatically built in.  No need to think about load balancing in depth: plug and play, and ask for more if you need it.
Microsoft SQL Server used to be a single server; then came replication, clustering and Always On availability groups, and scaling greatly improved performance.
The middle tier or business layer used to follow a singleton pattern - go through a single server for business logic; slowly load balancing improved and caching became better.  Nowadays, merely ramp up on your cloud provider.

Sharded Architecture: The application is broken into many distinct units/shards.  Each shard lives in total isolation from the other shards.  SOA or microservice architectures often use this approach.  For instance, build a complete application to handle ordering and a separate system that handles inventory.  Both could use different data stores: say orders are on Cosmos DB and inventory is on Azure SQL.  Some of the inventory data is static in nature, so I decide to use app caching (Redis).  Both data sources sit on independent serverless infrastructure, so if inventory has an issue, merely scale it.  The front-end store seamlessly connects to both separate systems.  "Sharding" databases/horizontal partitioning is a similar concept but only at the database level.  Sharding can be highly scalable, allows for leveraging and reusing existing services, and can be flexible as it grows.  Watch out for 2 Phase Commit (2PC)/Sagas/distributed transactions.
Thoughts Pros:
Great to reuse existing services instead of creating them yourself, e.g. App Insights on Azure.
Great for high availability.
Cons:
Increased latency - you may need to go to various systems in sequential order.
Keys (e.g. a clientId) need managing for this decoupled architecture type; it can also become complex, especially if you need to expand a shard to do something it doesn't do today.
Data aggregation and ETL can become complex and have time delays.

Event-driven architecture: Components only run when an event happens and are loosely coupled.  In Azure it generally covers: Functions, Logic Apps, Event Grid (event broker) and APIM.  Easy to connect using Power Platform connectors.

Hexagonal Architecture
Command Query Responsibility Segregation (CQRS) - a pattern where the paths for querying and inserting data are separated.  This is a performance and scaling pattern; see the sketch after this list.
Domain-Driven Design (DDD) - design software in line with business requirements.  The structure and language of the code must match the business domain.  DDD diagrams help create a shared understanding of the problem space/domain to aid conversation and further understanding within the team.
Event Sourcing pattern, AMQP
Competing Consumer pattern – multiple consumers are ready to process messages off the queue.
Priority Queue pattern – messages have a priority and are ordered for processing based on priority.
Queue-based load leveling
Throttling pattern
Retry pattern
Circuit breaker pattern
The Twelve-Factor App is a methodology for building software-as-a-service (SaaS) applications.
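
A minimal C# sketch of the CQRS separation mentioned above (the order types and in-memory store are illustrative):

// CQRS: commands mutate state through one handler; queries read through another.
using System;
using System.Collections.Generic;
using System.Linq;

public class CreateOrderCommand
{
    public Guid OrderId { get; set; }
    public decimal Total { get; set; }
}

public class OrderCommandHandler
{
    private readonly Dictionary<Guid, decimal> _store;
    public OrderCommandHandler(Dictionary<Guid, decimal> store) { _store = store; }

    // Write side: validates and mutates, returns nothing to read from
    public void Handle(CreateOrderCommand cmd)
    {
        if (cmd.Total < 0) throw new ArgumentException("Total cannot be negative");
        _store[cmd.OrderId] = cmd.Total;
    }
}

public class OrderQueryHandler
{
    private readonly Dictionary<Guid, decimal> _store;
    public OrderQueryHandler(Dictionary<Guid, decimal> store) { _store = store; }

    // Read side: shaped for the consumer, never mutates state
    public IEnumerable<string> OrdersOver(decimal amount) =>
        _store.Where(kv => kv.Value > amount)
              .Select(kv => $"{kv.Key}: {kv.Value}");
}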

Streaming/Message Bus: Kafka, IoT, ...
Azure Messaging Services is made up of 6 products:
1. Service Bus - normal ESB.  Messages are put into the queue, and 1 or more apps can directly connect or subscribe to topics.
2. Relay Service - useful for SOA when you have infra on-prem.  Exposes cloud-based endpoints to your on-prem data sources.
3. Event Grid - HTTP event routing for real-time notifications.
4. Event Hub - IoT ingestion, highly scalable.
5. Storage Queues - point-to-point messaging; very cheap and simple but with very little functionality.
6. Notification Hub - push notifications to mobile devices.

Azure Durable Functions - Azure Functions make it easy to create logic but are not good at long-running operations or operations of varying duration.  To get around the timeout limits, there are a couple of patterns that make Functions better at handling long-running operations.  The most common patterns are: Async HTTP APIs (trigger a function using HTTP, set off other functions, and the client waits for the answer by polling a separate function for the result), Function Chaining (execute functions sequentially, each starting once the last completes), and Fan-out/Fan-in (the first function calls multiple functions that run in parallel).
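
A minimal C# sketch of the Function Chaining pattern, assuming Durable Functions 2.x (the activity names are illustrative; StepTwo and StepThree are defined like StepOne):

// Function chaining: activities run sequentially, each receiving the previous result
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.DurableTask;

public static class ChainingExample
{
    [FunctionName("ChainingOrchestrator")]
    public static async Task<string> Run(
        [OrchestrationTrigger] IDurableOrchestrationContext context)
    {
        // Each await checkpoints state, so long waits survive restarts
        var a = await context.CallActivityAsync<string>("StepOne", null);
        var b = await context.CallActivityAsync<string>("StepTwo", a);
        return await context.CallActivityAsync<string>("StepThree", b);
    }

    [FunctionName("StepOne")]
    public static string StepOne([ActivityTrigger] string input) => "step one done";
}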

Lambda architecture: great for large data architectures.  Has a batch vs streaming concept.  Each transaction is pushed into a queue/stream (Kafka/Azure Queues/Azure Event Grid), and large data can be stored for later batch processing.

"Onion Architecture is based on the inversion of control principle. Onion Architecture is comprised of multiple concentric layers interfacing each other towards the core that represents the domain. The architecture does not depend on the data layer as in classic multi-tier architectures, but on the actual domain models." Codeguru.com

OpenAPI vs GraphQL
The OpenAPI specification (previously known as the Swagger specification) is my default for an API; this allows for a known RESTful API that anyone with access can use.  OpenAPI has set contracts that return defined objects, which is great: you can work with the API like a database with simple CRUD operations as defined by the specification.  The issue is that the returned objects are fixed in structure, so you may need 2 or more queries to get the data you are looking for.  Alternatively, GraphQL allows the developer to ask for the data exactly as they want it.
Open API example:
/api/users/2            // Returns the user object for user 2
/api/users/2/orders/10  // Returns the last 10 orders for the user
GraphQL example:
Post a single HTTP request.
query {
  User(id: "2") {
    name
    email
    orders(last: 10) {
      orderid
      totalamount
      datemodified
    }
  }
}
You can see that for complex, changing systems, GraphQL is potentially a better choice.  I also like the idea of using Hasura for ORM using GraphQL against PostgreSQL (and hopefully SQL Server and others).



Thursday, 30 April 2020

AAD Conditional Access

What is Conditional Access in AAD: Microsoft AAD with conditional access allows users or groups to be verified more securely: after the login attempt, an additional check identifies whether the account may be compromised/at risk or is good.  Microsoft uses algorithms and a ton of collated information to determine the risk of the attempted login.  A simple example would be a user's location being unusual, or logins from different places in the world within too short a period.

  • First-factor authentication happens before conditional access.
  • Setting up conditional AAD access
  • Conditional Access is part of Azure MFA
  • Configure conditions for access
  • It is easy to bypass MFA if a user is an ADFS-federated user or coming from a specific IP range (head office location) or region.  You can also allow a one-time bypass if a user loses their phone.
  • Requires Azure AD Premium licences

Monday, 27 April 2020

Azure DevOps/TFS Basics


Overview:  There is a lot you can do with Azure DevOps to monitor your projects.  A couple of simple charts can be used to motivate (or demotivate) your team.  Start simple and build...









Sunday, 19 April 2020

Knowledge Transfer/Support Handover

Problem:  Projects that I tend to work on are completed by Scrum teams filled with specialists and specialist contractors who move on after project completion.  Support is generally handled by dedicated people/teams offshore.

Hypothesis: Having high-quality support people working alongside you throughout the project is not very common due to costs.  I believe there are key points to cover to ensure that operational support is effective.  Too many companies merely focus on checklists, and the ops team doesn't get a fundamental understanding of the system.

Resolution:
1. People/Support: Understand the domain - Hard
2. People/Support: Understand the architecture - Easy
3. People/Support: Understand who is responsible for level 1 to level 3 support and what that entails.  Easy if done correctly.
4. People/Attitude: Hire patient, collaborative, eager people in support (the most key point) who want to learn and take ownership.  Easy if done correctly.
5. Knowledge base - have a wiki or equivalent.  The same issues always present themselves, so document them and have an answer that can help your users.  I also like to record mp4s for different levels of support.  Record the sessions, as it is too easy for level 3 people to say they never got a handover or covered something.  This allows people to look back and easily train additional users.  Easy if done correctly.
6. Ensure you have automated tests; they are a great source of how your system works.  And if a fix has to be released, it is also easy to validate that the original logic still works.  Hard, but it returns great benefits if used.

Sunday, 22 March 2020

My Solution Documentation Thoughts

It all depends on the project but this post outlines what I have found to be the best practices for documentation on projects. 

Documentation should not be an afterthought but done effectively throughout the development of any project.  It helps clarify thoughts, helps communication and should save time.  Documentation is generally poor as it is dumped on people who tend to write it from the wrong point of view.  For example, developers know the products or components but write from their own point of view, which is not necessarily effective for the enterprise's understanding.

Documentation Should Cover

  • Overview & Startup Documentation - Get the team with a common understanding.
  • Architectural Design Decisions (ADD) - Get the technical people on the team to a common understanding.  Software Design Document (SDD)/architecture design document - description/overview.  High Level Design (HLD) & Low Level Design (LLD).  Architectural design decisions are stored in an Architectural Design Repository (can be as simple as a file server; I prefer SharePoint and a wiki index).
  • Requirements - User Stories/Use Cases.  Get good clear requirements from the business.
  • Code Documentation - Code comments & API Documentation/Swagger
  • Performance And Testing
  • User/System Documentation - User guides and knowledge bases reduce escalations and the time to get end users working.  For support documentation I use wikis; they are easy to use and update.  Once a problem is solved, it is easy to add a new wiki entry, and all future support is much easier.  Wikis are quick and easy and should be kept current; don't hold old decisions.  Wikis are searchable and taggable.

Tip: I record a lot of decisions and support material using Snagit.  It's fast and brilliant for knowledge bases and end-user training.  Considerably less effort than written documentation.
Note:  A lot of specific documentation is needed for legal and compliance/regulation purposes.  This can be pretty heavy, but it is still best to understand the requirements and do it from day 1.
Thought: Technical writer (can be a dev, BA, technical architect or a dedicated technical writer) - I believe the BA should also be the test lead on non-scaled Agile products.  They understand the requirements and are therefore best placed to understand the testing and write clear, concise documentation in the form of test cases or acceptance criteria and user stories.
Tip: Use Grammarly and do documentation professionally.  Ensure your documentation is easy to follow and free of spelling mistakes and grammar issues.  Lastly, the layout must be consistent between different documentation writers, be this in code comments or full end-user documentation.
Thought:  Write in the present tense in an active voice; it forces people to look at the now and the future.
Note: Companies have guidance and document standards; ensure you know the format of documents and comply with company guidelines.  This may be as simple as fonts and colours in your documentation, up to specific document formats such as TOGAF documentation standards.  Make it easy for your project with a little planning.
Thought:  Code comments - naming should do most of the documentation, but complex logic or implementation decisions should be commented using the KISS principle.  Don't document exactly what the code says: for If (status == 21), don't comment // Apply logic if status is 21; rather use // Update the Customer Web Service if the user's email address has changed.
Comments should not be used to keep deleted code around in case the developer needs it later.  You have source control; delete the code.

Agile Documentation: Agile does not mean no or low documentation.  Agile documentation should be clean, concise and save time overall for the team members.  Keep documentation essential; don't over-document or document items that are obvious.  Prioritise documentation like we do in backlog evaluation.

Slack/Teams/Email:
I was a Slack evangelist; it is awesome for Agile projects, especially for projects with people in different locations.  Well, now I am a Teams guy.  It's awesome, simple and lets you remove so many dependencies.  If you haven't used it before and you have Office 365, it's a "no brainer"; in 2 weeks everyone will love using Teams.  I have had many dysfunctional teams that needed coaching: teams that document everything, and in stand-ups you hear "I sent you that in an email".  The first thing I tell these teams is "email is not a defence": go and speak to the person.  These teams To and CC nearly all their email.  I immediately enforce the rule that To means I want a reply and CC means it's important to you.  If someone then sends an email that is CC'ed, I ask them why, and they generally learn to use email conservatively.  Several years back, I stopped a team from using email for 2 sprints to get them communicating and trusting each other again.

Sunday, 8 March 2020

Handling Security Incidents

Security Incident: An incident that has potentially compromised a company's systems or data.

Goal:  Focus on restoring the confidentiality of systems/data and preventing further attack.  Contain the incident and eradicate the issue.  The full resolution target timeline should be met for incidents; these can take up to 100 days, depending on the complexity.

Examples:  Viruses, trojan horses, stolen data, escalated unauthorised permissions, a compromised server, copied data, DoS, unauthorised system access, ...

Each event needs to be recorded and worked through the life cycle (ISO 27035).  This can be done with dedicated software or modules such as ServiceNow's Security Incident Response (SIR).

  1. Plan & Prepare
  2. Detection
  3. Assessment and Decision - Get logs, review/analyse, document the findings, notify leadership teams.  Assess impact/priority, e.g. critical vs low business impact.
  4. Response - Limit the damage, decide on the approach, notify if needed, and remediate.
  5. Lessons Learnt - Ensure the threat is removed; the lessons can help reduce the attack surface for similar issues.

https://en.wikipedia.org/wiki/Computer_security_incident_management

Note: Be careful not to delete forensic evidence.

Tip: Organisations must have a Security Incident Plan.  Planning, being ready, and knowing what to do in advance improves the handling of security incidents.