Monday 30 December 2019

Power Apps Licensing

Overview:  Licensing has changed a fair amount over the past 3 years for Power Apps and Flow.  This post summarises my understanding of licensing on the Power Platform as of Dec 2019.

Update 4 Nov 2021: Power Apps licensing prices were halved in 2020, and a few days ago at the Microsoft Ignite conference a third pricing option for Power Apps, "Pay-as-you-go", was announced.  The other two options are the "Per User" and "Per App" plans mentioned below.

Types of Licences:
1. O365 and D365 licensing: Basic Power Apps licensing comes with these plans, but you only get a small subset of connectors (missing the Azure SQL and CDS connectors).
2. Per-User or Per-App Plans: There are no more Power Apps P1 and P2 plans; you need to use either the "per-user" or "per-app" Power Apps licensing plan.
3. Community Plan:  Allows a developer/creator the full set of functionality, free of charge, but the apps can't be shared.

"To run standalone apps, guest users need the same license as users in your tenant."  Microsoft Docs.  
  • Power Apps uses AAD B2B (both members and guest) 
  • Using standalone Power Apps shall require the Power Apps licence (not necessarily Portal Apps).  Steps to Share Power Apps with Guests
  • SharePoint user interacting with a Power Apps form on a SharePoint list do not require a Power Apps licence as you already have O365.
Updated: 29 May 2020 
Problem:  I cannot import Power Apps solution (zip) files into other DTAP Power Apps environments.  The "Import" button is disabled.
Initial Hypothesis:  My trial per-user Power Apps licence expired, and most functionality was still working with my O365 built-in Power Apps licence.  The administrator could not renew my old licence or assign a Power Apps per-user or per-app licence.
Resolution:  The Ops team assigned me a Dynamics 365 licence, and the Import functionality is working again.



Thursday 12 December 2019

PL-900 Microsoft Power Platform Fundamentals Beta Exam

Overview:  This morning, I took the PL-900 beta exam; it is seriously tough.  I don't think I passed, but we will see when the results come out two weeks after the beta closes.

My Thoughts: The exam covers three products: Power BI, Power Apps & Power Automate (formerly called Microsoft Flow), but it is extremely wide-ranging across related Microsoft technology.  You really need to have worked in detail with the products, as the questions were not straightforward and often combine multiple related services/products.

What I learnt:  You need to know the 3 products well, and how they interact.  My connectors knowledge could be better, CDS comes up a lot, and my Dynamics 365 knowledge is lacking.  There were some Cognitive Services questions that I was not ready to deal with.  It is a good test of knowledge that has helped me realise I have holes in my knowledge of the Power Platform.

The exam itself:  Microsoft exams become easier as you learn how they ask questions; I found some of the language and questions difficult to follow.  Working out what is actually being asked is the challenge, but this is pretty standard in a lot of Microsoft certification exams.

I did the exam remotely using Pearson VUE software, which is great: no need to book and travel to a test centre.  They check your environment and identification and watch you throughout.  I got a warning for reading aloud, which until that point was a great help, as the wording of the questions is not the easiest to understand, but it makes sense as people could record the questions as audio for cheating with brain dumps.

Summary:
  • I am convinced that I failed PL-900.  
  • I learnt a lot preparing for the exam and it was a good experience.
  • I need to look at Dynamics & Cognitive Services in more detail.
  • Remote exams are awesome and secure in the way they are run; don't talk to yourself during the exam.
  • I've missed taking Microsoft and other certifications.

Tuesday 3 December 2019

Microsoft Teams Governance

Update 2020/07/07:  I recently watched a presentation by Rupert Squires on MS Teams Governance that provides a good introduction to Teams governance and thoughts for your MS Teams projects: https://www.slideshare.net/RupertSquires/positive-governance-in-5-steps-for-microsoft-teams

Storage: MS Teams stores data in SharePoint, Exchange and OneDrive for Business (OD4B).  Each team has its own dedicated SharePoint site.  Files shared in chats are stored in the sender's OD4B; chat messages and images are stored in Exchange.  Teams provisions in the location where the tenant was set up.  In the O365 Admin centre, you can see where your instance of Microsoft Teams is located; mine is in the EU zone.


If location is important to you for compliance reasons, it's important you select the correct location.  I favour the EU zone, as Europe is pretty strict on data and pretty central globally to my user bases.  But it does come down to your clients' needs.

All data is encrypted at rest and in transit, and Teams data is stored in the Microsoft data centre for your region.

MS Teams Configuration: The Teams admin centre allows a great degree of control in granting different users different rights.  You can easily turn off features for groups of people to align with your company's governance.  There are a few dials to help you get granular control.

O365 Group naming policies are good for controlling access.  They allow for a common, easy understanding of groups and what each group has access to.

Sensitivity Labels/Azure Information Protection - very useful

Team Creation Thoughts - Allowing anyone to create teams results in a complex scenario that has to be governed and brought back under control.  Too much restriction results in people not using Teams and potentially using shadow IT to achieve their goals.  Privacy must be appropriate (public vs private); don't allow people to just go for public because it sounds more open.  Each team should have a purpose and is likely to have an end date (not always).  Don't just follow the org structure.

Teams is not a replacement for all tools, but it works best if you work out the right tool for each job; often Teams can replace a variety of tools in large enterprises.

Teams should be using MS Teams instead of email; this is generally how one can tell if Teams is being well implemented.  For me, I want to use Teams for all communications and the project-related file store.  Email should be a last resort.  Remove Skype if you have Teams.  Consider removing Zoom, WebEx, GoTo...; you have Teams, so use it as the chat, calls and team meeting tool by default.  Schedule meetings from Teams if possible so the meeting sits in the correct team and is therefore context-based.
Don't allow people to add any app; think about it.  The exception is possibly small companies.

MS Teams ALM: Ensure that teams are deleted when no longer needed.  ShareGate's Apricot tool is great for getting Teams under control.  Archive an MS team before deleting it if you require the data or it has to remain available.  Owners can delete a team; the team goes into the recycle bin for the default set period (30 days, I believe).  After this it is gone, including the underlying storage data.

Note: When you delete a team all the underlying corresponding data is also deleted from SharePoint, Exchange and OneDrive.

MS Teams Series:
https://www.pbeck.co.uk/2019/12/microsoft-teams-governance.html
https://www.pbeck.co.uk/2020/05/microsoft-teams-overview.html
https://www.pbeck.co.uk/2020/06/multi-geo-for-ms-teams.html

Web Api hosted on Azure App Service with OIDC security using Azure AD B2C

Problem:  I want to add security to my .NET core ASP.NET Web API C# application using Azure AD B2C.

Terminology:
  • .NET Core - a cross-platform evolution of the .NET Framework.  It allows your application to run on Linux, macOS and Windows, and you do not need to have the .NET Framework installed.
  • ASP.NET Web API - follows the MVC pattern, using controllers and models to provide HTTP services, e.g. "give me the temperature in Paris today".
  • Azure App Service - hosts an MVC app or Web API on Azure.  Acts as a web server; it is scalable and fully managed.
  • Azure Active Directory (AAD) B2C - AAD B2B is different to AAD B2C; they are totally separate services on Azure.  Business to Consumer (B2C) provides applications with an identity repository.  B2C provides authentication and identity management as a service to web and mobile applications.  Think of it as the same as Google authentication, but you own the identity provider instead of relying on third-party authentication providers like Google.
  • IdP - Identity Provider; B2C is one of two AAD services for managing users/identities on Azure.
  • MVC - Model, View, Controller is a pattern used to arrange software.  In this post I'm referring to projects that utilise the MVC templates to create a project for web sites or Web APIs.



Problem: An MVC web application hosted on a Web App, using Azure B2C; B2C holds the users and also uses a social Identity Provider (IdP), namely Google.

Figure 1, Create a new project on the Google Developer Console

Figure 2, OAuth Consent Screen setup
Figure 3, Add the Credentials to Google
AAD B2C linkup to Google IdP.

High-Level Approach:
  1. Create your own Azure tenant & B2C service instance on Azure (using the Azure Portal)
  2. Register your ASP.NET Web application on the Azure tenant (using the Azure Portal)
  3. Create User Flows (Policies) on the B2C tenant (these define the flows to sign in a user, create a new account, or let a user reset their password, ...)
  4. Set up Google to connect to the B2C IdP (see figures 1-3)
  5. Update the application created in Step 4 so that it is aware of the Google IdP
  6. Perform the authentication setup - create the MVC web application using Visual Studio (a minimal API-protection sketch follows the tip below)
Tip: This approach and technology is extremely well documented by Microsoft.
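To give a flavour of the final step, here is a minimal sketch of validating B2C-issued tokens in the ASP.NET Core Web API; the authority URL, user flow name and client ID are placeholders, and the exact wiring depends on your project template.

using Microsoft.AspNetCore.Authentication.JwtBearer;
using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Validate JWTs issued by the B2C user flow (tenant, policy and IDs below are placeholders).
        services.AddAuthentication(JwtBearerDefaults.AuthenticationScheme)
            .AddJwtBearer(options =>
            {
                options.Authority = "https://contosob2c.b2clogin.com/contosob2c.onmicrosoft.com/B2C_1_SignUpSignIn/v2.0/";
                options.Audience = "11111111-2222-3333-4444-555555555555"; // The Web API's app registration (client) ID.
            });
        services.AddControllers();
    }

    public void Configure(IApplicationBuilder app)
    {
        app.UseRouting();
        app.UseAuthentication();   // Rejects requests without a valid bearer token on [Authorize] endpoints.
        app.UseAuthorization();
        app.UseEndpoints(endpoints => endpoints.MapControllers());
    }
}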


Monday 2 December 2019

Box.com to SharePoint migrations & O365 improvements

Box.com Tenant to SPO Tenant migrations:  Mover has been bought by Microsoft and allows moving files from Box.com, Dropbox and others into SharePoint (it's going to be free).

  • Fast Track has a tool that can do Box.com migrations, but the Mover tool is a great option.  
  • Mover allows for incremental update, permissions are roughly copied.  
  • File versions are not copied, only the latest file.  
  • Look out for file size and file types in the source.  
  • Look out for file names: O365 has restrictions on special chars, and Mover will remove them from the destination file name, e.g. *,/<>.:
  • Original Box.com timestamps are kept
  • I didn't need to move custom properties from Box into SharePoint metadata, and I can't see this functionality (I did read that ProvenTeq do metadata migrations)
  • A scan of the source data and a spike/PoC is a good idea to know what limitations you will face.
  • Mover is a good starting place for the technology to migrate from Box onto SharePoint Online.
  • ProvenTeq have a Box.com to SharePoint migration tool - I haven't used it.
  • MigrationWiz can migrate from Box.com to SharePoint (I've never used it).  
  • Use "Box embedded widget" to access Box content from inside SharePoint.
  • Custom code for migrations: Box.com and SharePoint REST have great APIs, but you'll need to manage the throttling, changes/updates and permissions yourself.  Use a tool unless there is no suitable tool (a throttling sketch follows this list).
  • The same common-sense approach as outlined a few years ago for Lotus Notes to SharePoint on-prem. migrations is essential.
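If you do write custom migration code, the main chore is respecting throttling (HTTP 429) responses from both APIs; a minimal retry sketch using HttpClient that honours the Retry-After header (the URL is a placeholder):

using System;
using System.Net;
using System.Net.Http;
using System.Threading.Tasks;

class ThrottledDownloader
{
    static readonly HttpClient Http = new HttpClient();

    // Retries on 429, waiting for the service-suggested Retry-After or an exponential back-off.
    static async Task<HttpResponseMessage> GetWithRetryAsync(string url, int maxAttempts = 5)
    {
        for (int attempt = 1; ; attempt++)
        {
            HttpResponseMessage resp = await Http.GetAsync(url);
            if (resp.StatusCode != (HttpStatusCode)429 || attempt == maxAttempts)
                return resp;

            TimeSpan wait = resp.Headers.RetryAfter?.Delta ?? TimeSpan.FromSeconds(Math.Pow(2, attempt));
            await Task.Delay(wait); // Back off before retrying the throttled call.
        }
    }

    static async Task Main() =>
        Console.WriteLine(await GetWithRetryAsync("https://api.box.com/2.0/folders/0")); // Placeholder URL.
}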
Key SharePoint Online limitations have been increased, namely:
  • 500k site collections per tenant is raised to 2 million site collections per tenant
  • SharePoint Online & OD4B file upload max size increased to 100GB
Sensitivity Labels: AIP, DLP, Teams, and a unified labelling experience across O365.  Sensitivity labels will be added to documents imported from Box.com using an O365 E5 licence.

  • Sensitivity labels work with teams now!  
  • Private channels are now available.
  • Sensitivity labels therefore allow for governance and protecting your teams and team channels.

OD4B: Allows external upload-only permissions.  Really useful given the number of large clients that still only support SFTP/SSH, or worse FTP, to share files.  A very quick, low-risk, locked-down way of getting files from clients and partners.

O365 tenant-to-tenant migrations: There seems to be a lot happening in this area, but right now it's a case of using the tools and FastTrack.

Plus Addressing in Exchange Online: Exchange Online offers plus addressing.  It allows you to sign up to newsletters and identify if your email address is sold on.  E.g. with paul@radimaging.co.uk, I will get the email sent to paul+news365@radimaging.co.uk; if it is sold on, I know news365 sold my details.
https://office365.uservoice.com/forums/273493-office-365-admin/suggestions/18612754-support-for-dynamic-email-aliases-in-office-36


Friday 29 November 2019

Redis Cache

Caching in Microsoft technologies has many forms and options.  You can have server-side cache, client-side cache, or downstream caching such as proxy servers.

Server Cache in .NET:
  • In-memory store: This type of server cache historically was very useful for maintaining state, tying users to a collection of objects/knowledge, and it is extremely fast.  You need to think about multiple instances and how to route traffic, e.g. sticky routing to ensure user Joe's session always goes to server 3.  But if server 3 dies, you lose all the cached data.
  • Local storage cache works well for larger amounts of data that are too big for a memory cache, but being storage-based it is much slower.
  • Shared/centralised caching allows a shared cache; a good example is using SQL Server for caching: the user can go to any front-end web server and the cache is pulled from SQL.  This allows for failure and removes the need for fixed user routing of requests (sticky sessions).  It is slower, as the cache is pulled from a central service that in turn goes into SQL to retrieve the data.
  • Azure Redis Cache is a form of shared caching.  A cache is better if held in memory, as Redis does.  Redis Cache is a service optimised for performance that allows stateless caching to work across server instances.  So while requests need to travel over your network in Azure, it is not as fast as a local cache but extremely fast for centralised caching.  Redis is pretty advanced and has clustering and replication to ensure performance is good (a cache-aside sketch follows below).
https://docs.microsoft.com/en-us/azure/architecture/best-practices/caching
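A minimal cache-aside sketch using the StackExchange.Redis client; the connection string, cache key and SQL loader below are made-up placeholders:

using System;
using System.Threading.Tasks;
using StackExchange.Redis;

public class ProductCache
{
    // Reuse the multiplexer: it is designed to be shared and long-lived.
    private static readonly ConnectionMultiplexer Redis =
        ConnectionMultiplexer.Connect("mycache.redis.cache.windows.net:6380,password=<key>,ssl=True");

    public static async Task<string> GetFeaturedProductAsync()
    {
        IDatabase cache = Redis.GetDatabase();
        string cached = await cache.StringGetAsync("featured-product");
        if (cached != null)
            return cached;                            // Cache hit: no trip to the database.

        string fromDb = await LoadFromSqlAsync();     // Cache miss: go to the data source once...
        await cache.StringSetAsync("featured-product", fromDb, TimeSpan.FromMinutes(10)); // ...and cache for 10 minutes.
        return fromDb;
    }

    private static Task<string> LoadFromSqlAsync() => Task.FromResult("Widget 3000"); // Stand-in for a real SQL query.
}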

Client Side Cache:
  • The browser can hold information in session, but it is less secure than a server-side cache.
  • CDNs are a way of retrieving data from an optimised central store, most useful for static files and data.
  • Adding headers to HTTP responses allows for downstream caching.  For example, I offer a REST API (e.g. a C# Web API) that returns a featured product that changes hourly.  I could easily set expiry caching to 10 minutes: the product is fetched every 10 minutes for each user and added to the user's local cache.  So if an average user is on for 20 minutes making 10 requests, only 2 of the requests go to the REST API; the other 8 calls are served locally.  Coupled with server caching, the requests to the server are fast, with very few calls to the back-end database, and the relatively static data places far less demand on the web service.
  • Validation caching is client caching that stores data locally, but a request is sent to the server to ask if there has been a change; if there is a change, the new data is sent back.  If the data has not changed, a 304 response is sent and the browser uses the previously stored local cached data (see the sketch after this list).
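Both expiry caching and validation caching can be driven from a C# Web API; a minimal ASP.NET Core sketch where the ETag value and payload are made up:

using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class FeaturedProductController : ControllerBase
{
    private const string CurrentVersion = "\"v42\""; // Hypothetical ETag for the current product data.

    [HttpGet]
    public IActionResult Get()
    {
        // Expiry caching: clients (and downstream proxies) may reuse the response for 10 minutes.
        Response.Headers["Cache-Control"] = "public, max-age=600";
        Response.Headers["ETag"] = CurrentVersion;

        // Validation caching: if the client already holds the current version, reply 304 with no body.
        if (Request.Headers["If-None-Match"] == CurrentVersion)
            return StatusCode(304);

        return Ok(new { Name = "Widget 3000" }); // Stand-in for the real featured product.
    }
}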
Client and Server (Redis) side Caching on Azure
Note:  The diagram only outlines client caching referring to the browser or end device.  Using HTTP headers, you can also set down stream proxy caching.  This is also generally referred to as client caching.

Firefox has a nice shortcut to check your client cache is working.  Open Firefox and request the URL that should cache a resource on the local client machine, then in the address bar type "about:cache" and check what is being served up locally.


Basic Caching Options:
1. Client Cache (local) - saves on the network call and server-side processing.  Can hold client-specific static data.
2. Client Cache (proxy) - saves on server-side processing.  Can only hold generic static data.
3. Server-side Cache (Redis) - saves on computing resources by not having to connect to data sources on every request.  Useful for static shared data.



Friday 22 November 2019

Azure IaaS backup Service Notes

Azure has great cost calculation tooling.  DR can be pretty expensive if running but not being used.  Having the ability to either turn on or deploy a DR environment can make massive cost savings.

I often see organisations overspending Azure dollars; basically most cost reduction falls into 1 of these 3 groups:
  1. Eliminate waste - storage & services no longer used
  2. Improve utilisation - oversized resources
  3. Improve billing options - long-term agreements, Bring Your Own Licence (BYOL)

Apptio Cloudability is a useful tool for AWS and Azure cost savings.  Azure has good help and tooling for cost savings.

Azure IaaS Backup:
  • Recovery Services Vaults
  • Off site protection (Azure data center)
  • Secure
  • Encrypted (256-bit encryption at rest and in transit)
  • Azure VMs or VMs with SQL, and on-prem. VMs or servers
  • Server OS supported: Windows 2019, 2016, 2012, 2008 (x64 only)
  • SQL all the way back to SQL 2008 can be backed up
  • Azure Pricing Calculator can help estimate backup costs
  1. Azure Backup Agent (MARS agent), used to back up files and folders.
  2. Azure Backup Server (a trimmed-down, lightweight version of System Centre Data Protection Manager (DPM)), used for VMs, SQL, SharePoint, Exchange.
  3. Azure VM Backup, managed in the Azure Portal, to back up Azure VMs.
  4. SQL Server in Azure VM backup, used to back up SQL databases on Azure IaaS VMs.

Backing up Azure VMs must be done to the same geo-location as the vault; backups can't cross geo-locations.  Recovery has to be to a different location (verify this is correct?)
Note: "Backup Configuration" setting of the Vault properties can be set to "Geo-redundant"

Azure Recovery Vault Storage choice:
LRS - Locally Redundant Storage - 3 local copies
GRS - Geo-Redundant Storage - 3 async copies in a paired region within the same geography, plus the 3 local copies - so you can keep data in Europe for compliance; all 6 copies stay in Europe.
Update Feb 2020: I think there is also a GZRS option, check if this has changed?

Naming is absolutely key, as is having a logical hierarchy within resource groups so it is easy to find resources.  I focus on naming my resources consistently; however, I've always felt "Tags" have little or no purpose in smaller environments.  In larger environments tagging can be useful for cost management, recording maintenance, resource owners and creation dates.  Lately, I've been finding it useful to mark my resources with an Environment tag to cover my Azure DTAP scenarios, e.g. Production, Testing, Development.

Tuesday 12 November 2019

Microsoft Information Protection


Check out my earlier post on AIP (Feb 2019)

End-to-end life-cycle for encrypting files using Azure Information Protection (AIP)


Use "Unified Labeling" to create labels



Note: Encrypting stops SharePoint from being able to look into the content of the file.  The labels and file name are still searchable, but not the content of the file.  eDiscovery, search and co-authoring don't work on AIP-encrypted documents.

Cloud App Security (MCAS) Screen Shot


Sunday 10 November 2019

OpenAPI Tooling working with WebAPI and APIM Notes

Editor.swagger.io is a great tool for building OAS files.  The Swagger editor is easy to use and has a preview for changes.

VS Code is a great IDE for working with OpenAPI specification 2.0 and 3.0 files (also known as the Swagger specification).  These 3 extensions are a good idea when working with an OpenAPI specification file.


Stoplight also has an editor, which is nice.  It takes a little getting used to, but it makes complex design-first API work easier.

Sunday 27 October 2019

Google Tag Manager

 "Tag Manager gives you the ability to add and update your own tags for conversion tracking, site analytics, remarketing, and more. There are nearly endless ways to track activity across your sites and apps, and the intuitive design lets you change tags whenever you want." https://marketingplatform.google.com/about/tag-manager/benefits/

Tag Manager is useful for adding 3rd-party tags/scripts.


Saturday 19 October 2019

Evidence Based Management for Scrum

Evidence-Based Management (EBM) is a management framework that uses evidence and metrics.  Execs and middle management often don't get insight from normal forecasts; EBM is a framework to help management move the company forward, using evidence to drive the business: inspect and adapt based on evidence.  I like to keep everything simple and move towards a more complex, mature set of measurements, as the easiest metrics to implement provide early value to the project.

"Evidence-Based Management is a framework organizations can use to help them measure, manage, and increase the value they derive from their product delivery."  Scrum.org

Example types of metrics you may want to measure:
  1. Velocity
  2. Sprint Burn downs
  3. Versioning - versions of product or bugs requiring multiple fixes
  4. Code Coverage & Code Analysis
  5. Missed bugs/escaped bugs - bugs missed in a release
  6. Issues/Failed Deployments
  7. Team happiness - capture metrics in Sprint Retrospective from the team
  8. Financial measure - cost to delivery
  9. Average User Story completion time per team (identify outliers and the average for each team; measure from started status to completed status).
  10. Average story points per team, factoring in the number of team members over multiple sprints (don't compare teams, as their unit size is not consistent across teams; if you do try to standardise, the teams will inflate points to fake outperforming other teams).
Help management nudge the teams and company in the right direction.  To get these metrics, you need to explicitly determine what the goals are, so you can figure out the KPIs and ensure you are measuring the right things.  Ensure the management goals are SMART (Specific: well defined, clear, and unambiguous) or use the OKR framework to define goals and figure out the metrics to collect.  Ensure the metrics are important to the goals of the company or team.  Consider using OKRs:
"Objectives and key results (OKR) is a goal-setting framework that helps organizations define goals — or objectives — and then track the outcome."  cio.com

Common KPI's:  
  1. Escaped Defects
  2. Defects in the last 14 days
  3. In Progress Features and User Stories per team
  4. Team velocity
  5. Stories completed vs stories committed
  6. Tech debt (known issues, items skipped)
  7. Team burndown charts
A couple of columns and dashboard widgets for summarizing progress in DevOps:
  1. Progress by User Story - total minus remaining, rolling up to features.
  2. Progress by Work Items in a feature - gets the user stories, tasks and spikes that make up the feature and shows the % of artefacts completed within the feature.
  3. Epic Progress - each epic shows the number of features and the number of features completed, e.g. 3 of 11 features completed.
  4. Number of "In Progress" features or User Stories per scrum team - very useful to check a team is using and updating artefacts proactively.
  5. Priority committed and done, using priority or WSJF.
Three goals of KPIs in Scrum:
  1. Measure deliverables
  2. Measure the team's ability to deliver business value
  3. Measure the scrum team itself

Wednesday 16 October 2019

Troubleshooting Box.com programmatic access

This post is loosely connected to:
Using Box.com Programmatically (JWT)
Using Box.com Programmatically (Part 2)

Troubleshooting:
Box has a brilliant Connectivity Test Tool

Error: System Error: System.AggregateException
   at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
   at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
Initial Hypothesis:  As my Box programmatic access was working on one machine but not on a VM at my client, I suspected it was a difference in the environment or network connectivity.
Resolution:  I added the System.Threading DLLs, as I assumed they were in the GAC on my machine but not on the server; it did not fix the issue.
Resolution:  I checked the traffic using Fiddler and could see URLs were not reachable/returning data.  Using the Box.com Connectivity Test tool, I could see multiple outbound https/443 URLs were being blocked by the corporate firewalls.

Error:  Error Message: System.AggregateException
 Stack Trace:    at System.Threading.Tasks.Task.ThrowIfExceptional(Boolean includeTaskCanceledExceptions)
   at System.Threading.Tasks.Task`1.GetResultCore(Boolean waitCompletionNotification)
   at System.Threading.Tasks.Task`1.get_Result()
Initial Hypothesis:  This happened after the networking was corrected, and from a machine on the Internet, so the issue was not networking.  I was not pushing the config.json file out to my bin directory, so the upload of a file was using an expired JWT.
Resolution:  The message did not give me a good inkling into the issue.  My output directory (e.g. /bin folder) was not getting updated with the config.json file.  I copied in the correct JWT/config.json file and the error was resolved.

Error: Error Message: System.IO.IOException: base64 data appears to be truncated
Resolution:  Check the config.json file, my file got corrupted.

Error: Error Message : invalid_client
The client credentials are invalid.
Resolution:  The config.json was for a trial tenant that I had previously set up; get the correct config.json file.

Error: Error Message: invalid_grant
 Stack Trace: Signature verification error. The public key identified by "kid" must correspond to the private key used for signing.
Resolution: The Box admin team had revoked the JWT, a new config.json file was generated and fixed the issue.
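
Most of these errors trace back to the config.json/JWT pairing, so it is worth sanity-checking the authentication in isolation.  A minimal sketch following the Box.V2 .NET SDK samples; the file path is a placeholder, and the method names are as I recall them from the SDK, so verify against the current docs:

using System;
using System.IO;
using Box.V2;
using Box.V2.Config;
using Box.V2.JWTAuth;

class BoxAuthCheck
{
    static void Main()
    {
        // Load the config.json downloaded from the Box developer console (path is a placeholder).
        string json = File.ReadAllText(@"C:\secure\config.json");
        IBoxConfig config = BoxConfig.CreateFromJsonString(json);

        var jwtAuth = new BoxJWTAuth(config);
        string adminToken = jwtAuth.AdminToken();      // invalid_client/invalid_grant surface here if the config or key is wrong.
        BoxClient client = jwtAuth.AdminClient(adminToken);
        Console.WriteLine("JWT auth OK");
    }
}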

Sunday 6 October 2019

Common Azure Services

Azure Key Vault - Secure config storage and retrieval
There are SDKs for working with Azure Key Vault, such as the "Azure Key Vault secret client library for .NET (SDK v4)".  It is extremely easy to get secrets from the secure vault using C#; a minimal sketch follows.
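The sketch below uses that library; the vault URL and secret name are placeholders:

using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

class KeyVaultDemo
{
    static void Main()
    {
        // DefaultAzureCredential tries managed identity, Visual Studio, Azure CLI, etc. in turn.
        var client = new SecretClient(new Uri("https://my-vault.vault.azure.net"), new DefaultAzureCredential());
        KeyVaultSecret secret = client.GetSecret("SqlConnectionString"); // Secret name is a placeholder.
        Console.WriteLine(secret.Value);
    }
}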

Azure Storage
Microsoft Azure Storage Explorer is a great tool for reviewing your Azure Storage; in the case below, I used it to add some Azure Table storage for a demo customer list (a code sketch follows).
There is also a web edition of Storage Explorer that is in preview as of 18 Nov 2020.
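The same demo data can be written in code with the Azure.Data.Tables client; a minimal sketch where the connection string, table name and entity values are made up:

using Azure.Data.Tables;

class TableDemo
{
    static void Main()
    {
        var table = new TableClient("<storage-connection-string>", "Customers");
        table.CreateIfNotExists();

        // PartitionKey/RowKey plus arbitrary properties for the demo customer list.
        table.AddEntity(new TableEntity("GB", "cust-001")
        {
            { "Name", "Contoso Ltd" },
            { "City", "London" }
        });
    }
}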

App Service - Host Web sites or WebAPI

Azure Artifacts - Create and share your NuGet and npm packages via Azure Artifacts for more reliable and scalable builds.

Azure Data Factory (ADF) - Basically PaaS, fully managed Azure ETL/SSIS.  Many connectors to ingest data; send the results to Azure Synapse Analytics.

Azure Big Data


Azure Synapse Analytics - a managed PaaS solution that brings together ADF, Data Lakes (both storage and analysis) and Azure Data Warehouse under a single managed solution.  It is easier than the individual pieces and scales as you need with almost unlimited capacity.  Azure Purview - discovers and analyses all your data, and integrates with AIP.  Azure Synapse simplifies analytics and is sold as PaaS (serverless) or dedicated.  The easiest way to draw data out of Azure Synapse is Power BI.  It is easy to bring data into Azure Synapse from Cosmos DB and SQL databases (with no effect on their performance): they can automatically push the data into Synapse with no need for ADF, and the data is available in near real time.



Azure App Configuration - Feature toggles/feature flags are extremely useful in code.  This service is great for turning on experimental features, operational features, environment/release features, and security features.  Feature Toggles (aka Feature Flags) (martinfowler.com).  Use it for feature flags, whereas Key Vault is for secrets; a minimal sketch follows.
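A minimal sketch of reading a feature flag through the Microsoft.FeatureManagement and Azure App Configuration libraries; the connection-string environment variable and flag name are made up:

using System;
using System.Threading.Tasks;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.FeatureManagement;

class FeatureFlagDemo
{
    static async Task Main()
    {
        // Pull configuration, including feature flags, from Azure App Configuration.
        IConfiguration config = new ConfigurationBuilder()
            .AddAzureAppConfiguration(options =>
                options.Connect(Environment.GetEnvironmentVariable("APP_CONFIG_CONNECTION"))
                       .UseFeatureFlags())
            .Build();

        var services = new ServiceCollection();
        services.AddSingleton(config);
        services.AddFeatureManagement();
        using var provider = services.BuildServiceProvider();

        var features = provider.GetRequiredService<IFeatureManager>();
        if (await features.IsEnabledAsync("ExperimentalExport"))   // Flip the flag in the portal; no redeploy needed.
            Console.WriteLine("Experimental export feature is on.");
    }
}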



Azure Resource Explorer - Documentation on Azure API's and ability to call the APIs.

Azure Policy - Custom Azure Policy templates can be created that apply rules to your subscription.  There are a lot of pre-canned policies.  You can enforce naming conventions, tagging standards, deployment of resources into specific regions, ....

Sunday 29 September 2019

OAuth for Custom Connectors

Problem:  I have an APIM endpoint that has both a subscription key and OAuth2 set up for security, and I need to connect Power Apps to it.

There are basically two parts to this problem: setting up your Power App to use OAuth, and then also passing in the subscription key on every APIM endpoint request.
  • This post assumes that both the APIM app registration and the client app have been registered on Azure AD.
  • Check with Postman that you can access the endpoint as shown below.  Postman will authenticate using the bearer token (OAuth) and the subscription key, as shown below, to test.
Client ID: 5559555-555a-5558-5554-555d4
Auth URL: https://login.microsoftonline.com/555fd555-555f-5554-5551-555b444c3555/oauth2/v2.0/authorize
Token URL: https://login.microsoftonline.com/555fd555-555f-5554-5551-555b444c3555/oauth2/v2.0/token
Refresh URL: https://login.microsoftonline.com/555fd555-555f-5554-5551-555b444c3555/oauth2/v2.0/authorize


We now know the APIM is working with OAuth and the subscription key.
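The same smoke test can be scripted in C# once a token has been acquired (e.g. via MSAL); the endpoint URL, token and key below are placeholders:

using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ApimSmokeTest
{
    static async Task Main()
    {
        using var http = new HttpClient();
        // Bearer token from the OAuth2 code flow.
        http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", "<access-token>");
        // APIM expects the subscription key on every call.
        http.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<subscription-key>");

        HttpResponseMessage resp = await http.GetAsync("https://myapim.azure-api.net/myapi/resource");
        Console.WriteLine($"{(int)resp.StatusCode} {resp.StatusCode}"); // Expect 200 when OAuth and the key are valid.
    }
}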
Configure OAuth from Power Apps:
  • Once Postman can get valid 200 responses using OAuth2 and the subscription key, set up the custom connector.  I could not get the Azure AD connector to work, so I used the OAuth connector as shown below:

Add a policy that will add the subscription key to every HTTPS request to the APIM resources, as shown below.

Updated 2 Feb 2020 - Example of custom OAuth code-flow authentication for custom APIs using AAD security.

Note: Deploying Custom Connectors in solutions always seems to be an issue.  It is a good idea to keep all connection references in a separate Power Apps Solution.

Saturday 28 September 2019

Flutter vs Xamarin vs React Native vs PhoneGap

Overview:  As a solution architect and CTO, I need to choose the right approach to help our clients select the correct technology for delivering solutions.
Flutter vs React Native vs Xamarin
My History of Building Mobile Applications:
  • Originally, building mobile apps required building two code bases, generally for Android and iOS, each in its own language.  Xamarin also allows you to write in C# for a specific platform.
  • Next came a single code base that compiled down into each native platform (Droid and iOS), then we would customise for each platform.
    • I like Xamarin Forms as I want native apps with a single code base (C#).  One can also write separate code bases for each of the 2 main mobile platforms.
    • React Native is built using Facebook's ReactJS library.  React Native, like Xamarin Forms, uses a single code base that compiles to an iOS and an Android app.
  • Note: An alternative around this time was to use HTML5 with PhoneGap and deploy to both mobile platforms; native controls could be used, but this was once again a split in the code base.
  • Flutter is the newest of the options.  Flutter is from Google and uses its own Dart language, which can be compiled into various formats including iOS and Android.  Flutter goes back to the single code base that pushes native apps to multiple platforms, including the web.  It is fast, looks good, and has great native interaction.

More Info
Flutter vs React Native - 

Alex Zubkov - 6 Aug 2020

Friday 27 September 2019

Basics of Flutter

Overview:  Flutter is a UI toolkit built by Google, used to create native mobile applications (and web sites (Hummingbird)) using a single code base.  Historically, I have tried to keep a single code base in the front end using Angular, KO or ReactJS for my websites, and used PhoneGap to build pseudo-native apps on Android and iOS.  This allowed me to have a single code base and deploy to Droid and iOS with minor tweaks in PhoneGap.
Tips:
  • There is a Flutter add-in for Visual Studio Code.
  • Flutter uses "Dart" as its programming language.
  • Dart is a strongly typed, object-orientated language that compiles into JavaScript for websites, and "natively" for iOS and Droid.
  • It can also compile code natively for Windows, Mac and Chromebook.
  • Everything is a widget (layout widgets, elements e.g. image, text, or gesture widgets (which listen for actions like a tap)).
  • Add widgets together to make a custom widget.
  • Widgets are either stateless (e.g. a picture) or stateful (e.g. a textbox).
  • Import material.dart to provide the basic building blocks.
The more I see of Flutter, the more I like it.  It is quick to build mobile apps and looks fantastic.

Cons:
  • Pretty new & hard to find skills
  • Slightly bloated on a native app - not noticeable to end users
Basic Environment setup:
  1. Surface Windows 10 Pro
  2. Visual Studio Code
  3. Flutter SDK
  4. Android emulator (I didn't setup iOS)
  5. Android device to load the package to try it out on a phone

Emulator Running using Flutter

Wednesday 11 September 2019

Scrum - Part 5 - Certification PSM 1

Overview:  The two main bodies that do the most credible Scrum certifications are Scrum.org (PSM) and Scrum Alliance (CSM).  This post is my notes from Professional Scrum Master (PSM) 1 exam preparation.

Note: Certifications teach the mechanics/ceremonies; the number 1 role of a Scrum Master is to ensure their team is happy.  Ensure team members are heard, are happy, feel safe, and are open to taking risks on the project and encouraged to do so.

The key to PSM 1 is to read the Scrum Guide several times; it's short, but you need to understand it, as you answer to its specification and not what other guides or books recommend.  It's a framework, and the certification is for the Scrum framework only.  There are a lot of great resources out there, including on Scrum.org's website.  The open assessments are excellent.

My PSM1 Scrum Certification Notes:

Three Roles, Five Events, Three artifacts, Five Rules (REAR)
Three Roles:  1) Product Owner (PO), 2) Scrum Master; 3) Development Team Member
Five Events: 1) Sprint - 4 weeks 2) Sprint planning – 8hrs 3) Daily scrum/stand-up – 15min 4) Sprint Review - 4hrs 5) Sprint Retrospective - 3hrs
Three Artifacts:  1) Product Backlog 2) Sprint Backlog 3) Increment
Five Rules: 1) Done 2) Time-box 3) Sprint Cancellation 4) Team size 5) Effort.
  • The Scrum Framework is based on Empiricism = Learn from our experiences.  Three pillars uphold every implementation of empirical process control: 1) Inspection 2) Adaption 3) Transparency.
  • Five Scrum Values: 1) Courage, 2) Commitment, 3) Focus, 4) Openness, and 5) Respect  (CCFOR)
  • On any project, there is one product backlog, and one Product Owner (PO), regardless of how many Scrum Teams work on a product/project.
  • Only the PO has the authority to cancel a Sprint before the Sprint is over.
  • The Product Owner is responsible for managing the Product Backlog, which includes that the Product Backlog is visible, transparent, and clear to all, and shows what the Scrum Team will work on next.
  • The Product Owner decides what makes the most sense to optimize the value of the work being done by the Development Team.
  • In order to effectively be the product value maximizer, the entire organization must respect the PO's decision authority via the PBL. Dev team only works on PO’s instructions via Product Backlog Items (PBI’s).
  • All Product Backlog Items must:
represent a requirement for a change to the product under development, and
have all of the following attributes: description, order, estimate, value. (DOEV)
  • DoD – the Definition of Done is created by the Development Team members.  Estimating is done by the Dev Team on each PBI.  Ordering and value of PBIs are owned by the Product Owner.
  • The product has one Product Backlog, regardless of how many teams are used.  Product Backlog Refinement = Product Backlog Grooming.
  • Sprint goals are the result of a negotiation between the Product Owner and the Development Team. Sprint Goals should be specific and measurable.  Each sprint needs a Sprint Goal.
  • The heart of Scrum is a Sprint, a time-box of one month or less during which a "Done", useable, and potentially releasable product Increment is created. This applies to every Sprint.
  • The duration of a Sprint is fixed and cannot be shortened or lengthened.  A sprint is over when the time-box expires.
  • The product increment should be usable and releasable at the end of every Sprint, but it does not have to be released.
  • A new Sprint starts immediately after the conclusion of the previous Sprint.
  • Development Teams are cross-functional, with all of the skills as a team necessary to create a product Increment.
  • The Development Team uses the Daily Scrum to inspect progress toward the Sprint Goal and to inspect how progress is trending toward completing the work in the Sprint Backlog.
  • Development Teams typically go through some steps before achieving a state of increased performance. Changing membership typically reduces cohesion, affecting performance and productivity in the short term. 3-9 Development team members.
  • The Scrum Master enforces the rule that only Development Team members participate in the Daily Scrum.  Scrum Master ensures Dev team know how to run daily scrums.
  • A Scrum Master is a servant-leader for the Development Team. Facilitation and removing impediments serves a team in achieving the best productivity possible.
  • The Scrum Master ensures that the Development Team has the meeting, but the Development Team is responsible for conducting the Daily Scrum. The Scrum Master teaches the Development Team to keep the Daily Scrum within the 15-minute time-box. The Scrum Master enforces the rule that only Development Team members participate in the Daily Scrum.
  • Sprint Review has two aims: 1) Review the dev increment from the sprint 2) Adapt the Product backlog.
  • Scale Scrum using the Nexus framework.  All teams use a single Product Backlog, Single Product Owner shared across scrum teams.  DoD(“Done”) mutually agreed by all Dev Teams on the product.
Disclaimer: A lot of this information comes from Scrum.org.  This post is my notes and how I thought about the PSM 1 certification.  If you disagree or see any errors, please post a comment or contact me to update. 

My other posts on Scrum:
Agile for SharePoint
Scrum for SharePoint - Part 1
Scrum for SharePoint - Part 2
Scrum - Part 3 - Scaling Scrum 
Scrum - Part 4 - Kanban and Scrum 
Scrum - Part 5 - Certification PSM 1 (This post)

Thursday 5 September 2019

C# to connect to SFTP Server

Overview:  This is a quick way to connect to an SFTP server programmatically.

Steps:
1. Create a C# project in Visual Studio; I used VS2019 and .NET Core in this example.
2. Add the SSH.NET NuGet package to the project as shown below


3. The code snippet below is simple; note that I'm using the SftpClient class from the Renci.SshNet NuGet package to do the SFTP file transfer.
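A minimal sketch of the idea using SSH.NET; the host, credentials and paths are placeholders:

using System.IO;
using Renci.SshNet;

class SftpUpload
{
    static void Main()
    {
        using (var client = new SftpClient("sftp.example.com", 22, "username", "password"))
        {
            client.Connect();
            using (FileStream fs = File.OpenRead(@"C:\temp\report.csv"))
            {
                client.UploadFile(fs, "/upload/report.csv"); // Push the local file to the SFTP server.
            }
            client.Disconnect();
        }
    }
}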


Azure Blob Storage supports SFTP; you can use cheap, redundant blob storage that accepts SFTP - a secure, cheap way to set up the server side of SFTP.

Wednesday 4 September 2019

SCRUM - Part 4 - Scrum with Kanban

Overview:  Kanban is a good lean framework for delivering software projects.  I like to lift parts of Kanban into my Scrum teams.  Kanban focuses on flow/process improvement, which can be useful to leverage.

Kanban boards - It's common to use Kanban boards so your team can use the sprint states for PBIs to show progress; they help identify PBIs that are struggling to get to a done state.  I pretty much always use additional tools when doing sprints.  Another common example is story points (not part of Scrum, but useful for PBI estimating).
Kanban allows items to be added at any time; this is not the way to go with Scrum, which has a fixed set of PBIs for the duration of the sprint.  Often the Scrum team gets support tickets if they are a product development team.

Problem: For instance, at a SaaS provider, the team had to handle 3rd-line support tickets that came in as an urgent priority.  The team were using sprints, and it wasn't working for delivering planned enhancements.  Everything was jumbled, and deadlines were getting missed.
Resolution:  30% of each 2-week sprint was held as placeholder capacity for tickets that could enter the sprint during the sprint.  This worked really well; it actually started at 40% and was reduced once the team got confidence.  This allowed grooming of the support tickets so only proper 3rd-level support tickets were entering the sprint.  It takes commitment, but this worked well to solve the specific problem.

Basically, in the same team, use Scrum for product development and Kanban for support.

Agile for SharePoint
Scrum for SharePoint - Part 1
Scrum for SharePoint - Part 2
Scrum - Part 3 - Scaling Scrum
Scrum - Part 4 - Kanban and Scrum (This post)
Scrum - Part 5 - Certification PSM 1

Thursday 29 August 2019

SCRUM for SharePoint - Part 3 - Scaling scrum

Introduction:  Scrum teams work best as relatively small teams (3 to 9 people) made up of high-quality individuals.  Scrum teams are made up of a Scrum Master, a Product Owner and up to 7 team members.  But in large enterprises, or software companies that deliver a large product, there is a need to use many more people to deliver working, quality software.  Scaling Scrum is an option, and there are a couple of frameworks to help.  Agile and Scrum struggle to scale; this post outlines possible pathways to scaling Scrum.

PMs vs POs: In smaller firms and projects, the Product Owner (PO) and Product Manager (PM) are often the same role.  In larger projects these are both key roles but with a very different focus.  PMs are outward-looking: where is the market going, what do our customers want.  POs focus on what pieces we will build internally.  For scaling, I generally favour POs having 1 or maybe 2 teams at the most.  I always want a single source of direction from the PM, so you can have 3-4 POs for a single PM.  And often PM is a function consisting of several PMs.


Scrum of Scrums (2-3 Teams)
Add an additional daily event for key people from each Scrum team, called a "Scrum of Scrums".  The meeting requires key stakeholders from each team to attend so that the integration between teams picks up dependencies and ensures cleaner, easier integration.  Scrum of Scrums is the simplest form of Scrum scaling and requires a 15-minute meeting, which I run in the format: what we did, what we are going to do, and blockers, from all attendees.

Scrum of Scrums helps reduce the integration unknowns and ensures the various teams can resolve dependencies in a timely fashion.  The problem is there are diminishing returns as more teams get added; this works well if there are 2 or 3 teams working on a single project and these teams self-organise.  In my opinion, more than three teams require a more robust Scrum-based framework like Nexus or SAFe.  Keep the "Scrum of Scrums" meeting small, with a technical team member from each scrum team.  I run them exactly like a daily stand-up Scrum meeting, with technical items moved to the backlog.

Scaled Agile Framework (SAFe) (3 or more Teams)
My experience with Scrum scaling using SAFe comes from working at a big four consultancy firm and with several clients.  SAFe is the most popular approach for scaling Scrum.  SAFe is valuable and, like all these frameworks, it comes down to being implemented well; SAFe is not easy to get working well.  Within a product business with fixed scrum teams it can work better.  It is a way for an organisation to be ready for Scrum and to coordinate large projects to guide direction.  The product managers and product owners are often at sea with what they are trying to achieve.  SAFe focuses on Agile software delivery, Lean product development and systems thinking.  The author here outlines SAFe as a way of scaling Scrum.  Program Increments (PIs) are how direction is agreed for the next period (often a quarter): Scrum teams get backlog items and then plan 4-5 sprints to get their portion of features/backlog items done.  SAFe only tends to work, in my experience, if "Organisation Readiness" (the scope of work for the PI is reasonably clear, the priority of features is agreed, and the teams are ready to understand their sprint work items) is done before PI planning.  The diagram below shows the Scrum teams at the centre and the other SAFe ceremonies as the outer layer.  Beware, PI planning is challenging.





Note: Organisations can jump on the Scrum framework and try to make all teams scrum teams.  This can be great and is generally a good sign; however, beware of managers/management comparing teams' sprint velocity through story points.  All the teams tend to increase their story points when compared, so they look like they are improving and beating other teams.  Focus on delivering quality in a trusted, rewarding team.

Note:  Using the same Scrum in every team is not ideal.  I like to let each team decide their rules and find their way.  The culture win from team members is where Scrum's value lies.  I like to leverage other frameworks like Kanban and story points.

Note:  I do like to have repeatable CI/CD tasks agreed across Scrum teams; even though a team may take longer, it works better for expertise and consistency in software delivery.  The point here is to tune to match your requirements.


SAFe Levels
Source Alan Reed (with thanks)

Program Increment (PI) planning - a 2-day event.  Day 1: have draft features and User Stories; all stakeholders are aware.  Day 2: team breakouts, confirm understanding and what can be achieved, finalise what is in the PI and what the risks are, and hold a confidence vote.

Nexus (3 or more Scrum Teams)
Nexus is for scaling scrum teams; I have never used it, but I like the thoughts and these guys know their stuff.  I like the concept of an integration team.  I have used this idea with floating team members with high degrees of specialization in CI/CD.  Having a central integration team ensures all the pieces come together for the deliverable.  Mixed with Scrum of Scrum meetings, the number of teams can be increased and for huge products having a central architecture/integration team is a great idea to reduce dependencies and ensure alignment from the various Scrum teams.

LeSS (Large-Scale Scrum) framework

Disciplined Agile Delivery (DAD) extends Scrum

Summary:  Scrum works well for small Agile teams; once you go over a single team, use daily Scrum of Scrums meetings to scale to two or three teams.  If the product is large, a dedicated architecture team responsible for build releases, integration of workstreams and the overall architecture, combined with daily Scrum of Scrums, allows for larger Scrum-based projects.
  1. Small - Use Scrum (1 team)
  2. Medium - Use Scrum-of-Scrums (2-3 teams)
  3. Large - Use SAFe or Nexus (3 or more teams)
SCRUM Blog Series:
Agile for SharePoint
Scrum for SharePoint - Part 1
Scrum for SharePoint - Part 2
Scrum - Part 3 - Scaling Scrum (This post)
Scrum - Part 4 - Kanban and Scrum
Scrum - Part 5 - Certification PSM 1

Sunday 11 August 2019

Send email using O365 in an Azure Function C#

Create an Azure Function; the basics for sending an email are below:

using System.Net;
using System.Net.Mail;

MailMessage msg = new MailMessage();
msg.To.Add(new MailAddress("someone@somedomain.com", "SomeOne"));
msg.From = new MailAddress("you@radimaging.co.uk", "Paul Beck");
msg.Body = "This is a test message using <b>O365</b> Exchange online";
msg.Subject = "Test Mail";
msg.IsBodyHtml = true;

SmtpClient client = new SmtpClient();
client.UseDefaultCredentials = false;
client.Credentials = new NetworkCredential("your user name", "your password");  // Generate an app password so the pswd does not change or if the account uses MFA
client.Port = 587;
client.Host = "smtp.office365.com";
client.DeliveryMethod = SmtpDeliveryMethod.Network;
client.EnableSsl = true;
client.Send(msg);  // Send via the Exchange Online SMTP relay

Tip:  Azure SendGrid is a great service for sending out emails.  Amazon has SES (Simple Email Service).