Thursday 7 December 2023

Upgrading Two web applications and verifying using Playwright - super fast

Overview: A couple of my recent internal projects all clipped together to allow me to upgrade two websites to .NET 8, verify the upgrades, and commit to source control in a regulated, controlled manner - and it took less than 30 minutes.

I downloaded the latest version of Visual Studio 2022 Enterprise edition and noticed an option to upgrade my .NET projects, so I clicked it.  The .NET Upgrade Assistant downloaded and installed itself into Visual Studio.  The upgrade is done using a vsix template import: Microsoft.NET.UpgradeAssistant.vsix

I thought I may as well upgrade my two current .NET projects:

1. App Service on Azure running Blazor .NET 6, using TFS for source control and published using my Visual Studio profile.

The upgrade took about 10 seconds (I chose .NET 8 LTS), and I published.  The code is still not checked in.  I had a quick look and the app looks to be running correctly in a browser.

2. Static Web App hosting a Blazor .NET 6 app, connected to GitHub and published as a gated check-in using GitHub Actions. I ran the upgrade, and when I checked into the main GitHub branch, the action fired and upgraded the Static Web App.

Verify Build:

So I had checked both apps were running using the good old open-in-a-browser-and-look-around approach.  A few days ago I was playing with Playwright, and my testing covered validating that the App Service website can send email, is running and its text is visible; it also checks a Mendix low-code website, and lastly it looks at the Static Web App to validate it is serving pages.  I did this in Visual Studio Code.

The tests tell me both applications are running, verify WCAG compliance on one app, and also check that a Mendix website is running.

Summary:  By re-using the test project I could quickly verify the project upgrades.  The first project still requires a commit to complete, but this approach is far safer than the direct-to-production gated check-in done on the Static Web App.


Mendix - Part 2 - Diving deeper (E2E automation testing of Mendix using Playwright)

Mendix Series

1.  Overview of Mendix 

2. Mendix - Part 2 - Diving deeper (This post)

AI with Mendix (current version Mendix 10.5.x):

  1. Logic bot - recommends what you are likely to do, like a copilot as you go along building the app
  2. Performance bot - shows redundancies, recommends performance improvements 
  3. Chatbot in beta

Playwright is a good UI testing tool for Mendix:

For more advanced applications, Playwright is a good testing framework that helps developers know their code is running end-to-end.  It is useful for monitoring applications and behaviour, and it can also be used as part of the CI process to validate Mendix end-user accessibility, as shown in this mp4 (7 minutes - good video).

Thoughts:

I needed to change from US format to UK date time format:
Community has the answer: Mendix Forum - Question Details

Tuesday 5 December 2023

Playwright series - Post 1 - Overview of E2E testing using VS Code for Low Code

Setup: I have installed Node 20.10.0 and the VS Code extension for Playwright.  The installation and getting-started guides are clear and of a high quality.  https://playwright.dev/docs/intro  I am running on a Windows 10 Surface 4 with 16GB.  I am using TypeScript (ts) as it is the default and the recording mechanism works well with ts.  Previously I have used C# as it's my language of choice, but I feel ts is easier to maintain and there is no need for complex logic/functions in end-to-end (e2e) UI testing.  New features always come out in TS/JS first.

Thoughts:  Playwright is easy to use, fast, configurable and flexible.  UI e2e testing allows me to know my apps/sites are working as expected.  Manual testing is time consuming, and amending automated tests can be hard.

Setup Reminder:

1. Install the Playwright extension using VSCode (once at initial setup)

2. Open a new folder in VSCode, and open the "Command Palette" (once for each new project)

>Install Playwright

These are the defaults and will use TypeScript as the base language; stick to this as it is the simplest.  VSCode builds the default file scaffolding as shown above


3. Create your first New Playwright UI Test:

3.1. Record new


3.2. Enter a URL in the recorder browser, and click around (optional add Asserts) 



3.3. Save the Test

3.4. Execute the test

The green tick can be used to quickly run the test locally.  In the "Test Results" terminal, you can see the same test was run 3 times; my configuration is set to test Chrome, Firefox and WebKit.
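The three-browser behaviour comes from the projects section of the generated playwright.config.ts; this is the relevant slice of the default scaffolding:

```typescript
import { defineConfig, devices } from "@playwright/test";

export default defineConfig({
  testDir: "./tests",
  projects: [
    // Each project runs the whole test suite against one browser engine.
    { name: "chromium", use: { ...devices["Desktop Chrome"] } },
    { name: "firefox", use: { ...devices["Desktop Firefox"] } },
    { name: "webkit", use: { ...devices["Desktop Safari"] } },
  ],
});
```

Deleting a project entry (or running `npx playwright test --project=chromium`) limits the run to a single browser.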

Why Playwright?

  • Easy to understand/follow,
  • Easy to record,
  • Open source, 
  • No paid licencing, 
  • Faster than Selenium,
  • Various coding languages supported (bindings for C#, Python, Java, JS, TypeScript),
  • UI verification using screenshot and AI to minimize flakiness/static DOM reliance,
  • Ability to debug and trace is strong,
  • Can do API testing,

Possible Playwright UI testing Layers: 

  1. Full regression goes into detail and runs in Chrome, Firefox and WebKit/devices 
  2. Check-in tests are comprehensive on a single browser for code check-ins
  3. Continuous testing - record logging in, reading from a db and calling an API.  The run can write to logs, i.e. Dynatrace, Azure Monitor or SolarWinds, using their APIs.  Doing this every 5 minutes will tell you at a high level if the service and its dependencies are running and whether there is a performance change.
  4. Developers can write detailed local tests when working in an area, and reuse them when they come back and change any code.
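For layer 3, this is a minimal sketch of the heartbeat record such a scheduled run could push to a log platform; buildHeartbeat and the field names are illustrative, not a real Playwright or Azure Monitor API:

```typescript
// Shape of one synthetic-monitoring result, e.g. one scheduled Playwright run.
interface Heartbeat {
  service: string;     // which app/site was exercised
  ok: boolean;         // did the whole journey pass?
  durationMs: number;  // end-to-end time, for spotting performance changes
  timestamp: string;   // ISO 8601, so log platforms can index it
}

function buildHeartbeat(service: string, ok: boolean, durationMs: number): Heartbeat {
  return { service, ok, durationMs, timestamp: new Date().toISOString() };
}

// After each run, the record could be POSTed to the log platform, e.g.:
//   await fetch(logsEndpoint, { method: "POST", body: JSON.stringify(hb) });
const hb = buildHeartbeat("static-web-app", true, 420);
```

Graphing ok and durationMs over time gives the high-level "is it up, and has it slowed down" view described above.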

Testing Challenges:

Unit testing is a challenge in low code - unit tests are fast and ideal for C# or other code, but not easy to implement for low code.  There is a new beta feature for component testing in Playwright; I don't think it adds value.  API testing - I use Postman for API testing, including controlling my CI.  Low-code automation testing is hard in the Power Platform; E2E Playwright testing in context works pretty well.  APIs/network traffic need to be mocked.

Challenger products:

  • Selenium - QAs are highly skilled here
  • Cypress - devs tended to use this over Selenium
  • Product-specific tools, such as Test Studio in the Power Platform, and ...
  • I also like BrowserStack's low-code testing, especially if there is no CI/CD, as you can manage everything from there and use it across different low-code technologies.

Summary: Generally I'd go for Playwright over all the others. 

Thursday 23 November 2023

UK Tax - MTD for ITSA Updated

The MTDfITSA saga has been running for many years, and as of 23 November 2023, this is the current state:

Go live: starts 6 April 2026 and applies to a far smaller base than the originally intended group of 4.5 million users.   These are the key changes and points: 

  • Who is in?  Self-employed people and property landlords (outside of limited companies) need to register.  Initially, landlords with joint property ownership are not mandated to join (HMRC must provide more information).  Individuals under MTDfITSA who previously did a Self Assessment can have one or multiple self-employment businesses and 0 to 3 (4 - foreign property is unclear) property businesses under MTDfITSA (ordinary UK property, FHL UK, FHL EEA, foreign property).
  • Who is out?  Partnerships are out, as is non-dom status (which has specific rules).  If a person is currently self-employed but has complications, such as joint individual property ownership that generates rental income or being a partner in a partnership, they are outside and continue to fill in self-assessments.  Trusts/estates, LLPs and Ltds are out at the start.  There are specific exemptions for income from foster care, and individuals without a National Insurance Number do not need to register for MTDfITSA.
  • EOPS concept is removed from MTD for ITSA
  • Quarterly submissions are now cumulative bookkeeping numbers per business: the quarterly figures are cumulative for the year, i.e. the Q2 submission consists of all data from Q1 and Q2.  Unlike VAT, where each return covers a specific quarter, ITSA quarters are cumulative during the year.
  • For each self-employed business or property business, a quarterly per-business submission is due to HMRC within 30 days of the quarter’s completion.  Declaration per business is still required (31 January the following year). 
  • A single crystallisation submission for the user at the year-end is due 31 January each following year.  A yearly declaration is also required.

  • Digital links/keeping (digital records).  You can't re-key or copy and paste.  There is no requirement to use bank feeds/PSD2 data.  Some bookkeeping software firms will likely file quarterly MTDfIT returns for each self-employed (and property) business.  Spreadsheets are an acceptable form of record-keeping; Excel and bridging software are sufficient as the source for filing.  Recording sales can use daily sales totals as the digital source but should ideally link to the raw input system.

  • Quarterly submissions are cumulative now, so you correct a mistake simply by adjusting the figures in the next period/quarter.

  • Starting 6 April 2026, this will apply to fewer than 750k users
  • Self-assessment people with a combined income over £50k in the two previous years need to register for MTDfITSA; the plan is to drop the threshold to £30k the following year.
  • There is a new penalty system.  Late payment attracts interest and penalties: basically no penalty up to 15 days late, then 2% of the outstanding balance for 16-31 days, and 4% after that, charged from the day past due.  Payment is due on 31 January the following year.  There are points, fines, and interest charges.  Penalty points apply for late filing: missing four quarters in 24 months incurs a £200 penalty.  A record of the last 24 months is retained.
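The late-payment tiers described above can be sketched as a small function (a sketch of the headline percentages only; the real HMRC regime also charges interest and late-filing penalty points):

```typescript
// Late-payment penalty on an outstanding balance, per the tiers above:
// 0% up to 15 days late, 2% at 16-31 days, 4% beyond that.
function latePaymentPenalty(daysLate: number, outstanding: number): number {
  if (daysLate <= 15) return 0;
  if (daysLate <= 31) return outstanding * 0.02;
  return outstanding * 0.04;
}

// e.g. £1,000 outstanding, 20 days late -> £20 penalty
const example = latePaymentPenalty(20, 1000);
```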

Summary of comparison between ITSA & SA:

SA:

  • One SA return is done each year per person.
  • Paper submission due 31 October for the previous year.
  • Online submissions are due 31 January – 9 months after the financial/tax year-end.

ITSA:

  • Four quarterly returns via approved software per business, requiring a digital record link to the underlying transactions per self-employed business.
  • Approved digital software to submit (no paper returns).
  • One month after each quarter, submit the quarterly cumulative return.
  • Crystallisation/Finalisation using HMRC-approved software per person, not per business.
  • No EOPS; under the new rules the year-end figures are due 31 January the following year.
  • The Crystallisation/Finalisation/Final Declaration is also due 31 January the following year.
  • Open question - ask HMRC to explain how to correct business totals for the year, e.g. 1. missing types of expenses; 2. as EOPS was removed, assume you have 30 days to finalise business accounts from the business year-end, bringing this forward nearly eight months.

Quarterly MTDfITSA is done per business and is due one month after the quarter period ends.  Property business quarters and year-end run in the same cycle as personal tax, starting on 6 April and ending the following year on 5 April.  As MTDfIT begins on 6 April 2026, the four quarterly submissions for the 2026-2027 tax year and their filing due dates are:

Qtr start date    Qtr end date    Qtr submission due date

6 Apr 2026        5 July 2026     5 Aug 2026
6 July 2026       5 Oct 2026      5 Nov 2026
6 Oct 2026        5 Jan 2027      5 Feb 2027
6 Jan 2027        5 Apr 2027      5 May 2027
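The table's pattern (tax-year quarters, each due one month after the quarter end) can be sketched as a small helper; itsaQuarters and cumulativeFigures are illustrative names, not an HMRC or vendor API:

```typescript
interface Quarter {
  start: string; // ISO date, first day of the quarter (the 6th)
  end: string;   // ISO date, last day of the quarter (the 5th)
  due: string;   // ISO date, one month after the quarter end
}

// MTDfITSA quarters follow the tax year: 6 Apr-5 Jul, 6 Jul-5 Oct,
// 6 Oct-5 Jan, 6 Jan-5 Apr; each submission is due one month after quarter end.
function itsaQuarters(taxYearStart: number): Quarter[] {
  const fmt = (y: number, m: number, d: number) =>
    `${y}-${String(m).padStart(2, "0")}-${String(d).padStart(2, "0")}`;
  const bounds: Array<[number, number]> = [
    [taxYearStart, 4], [taxYearStart, 7], [taxYearStart, 10],
    [taxYearStart + 1, 1], [taxYearStart + 1, 4],
  ];
  return bounds.slice(0, 4).map(([sy, sm], i) => {
    const [ey, em] = bounds[i + 1];
    return { start: fmt(sy, sm, 6), end: fmt(ey, em, 5), due: fmt(ey, em + 1, 5) };
  });
}

// Quarterly figures are cumulative: Q2 restates Q1 + Q2, so a Q1 mistake is
// fixed simply by filing correct cumulative totals in Q2.
function cumulativeFigures(quarterlyTotals: number[]): number[] {
  let running = 0;
  return quarterlyTotals.map((q) => (running += q));
}
```

itsaQuarters(2026) reproduces the four 2026-2027 rows above.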

 

Self Assessment filing options:

  1. Most people use the current XML online filing, due on 31 January after the personal tax year.  Around 11 million people in 2026/2027.
  2. Some people still use paper-based self-assessments, due in October after the tax year.
  3. MTD for ITSA returns will be due one month after each business quarter, and the finalisation/crystallisation process is due 31 January in the year after the personal tax year.  As HMRC calculates the tax, declarations are required from the taxable individual for the year and after each quarter per business.  Around 500k people.

Tuesday 21 November 2023

Review of BrowserStack's Low Code Automated Test Tool

Overview: Low-code testing relies heavily on complete UI end-to-end testing.  It needs to be fast, flexible, quick to correct, scalable and highly configurable.  BrowserStack's low-code test tool is in beta and definitely on the right path - for me, it still needs a few features.  I ran my testing against customised apps created on three platforms:

  1. Mendix low code,
  2. Microsoft's Blazor hosted on an Azure Web App, and
  3. Canvas app within Power Platform. 

Tip: I've looked at and used BrowserStack for many years, and it has moved from being a device-emulator infrastructure testing provider to a full ALM testing platform.  The low-code BrowserStack product has a recorder to capture steps.

Where does Low Code fit into Browser Stack:

Image1. Low code automation works well as part of the full BrowserStack Platform or just using the product by itself.

Pros of the Low Code BrowserStack Product:

  • The local recording feature is easy to set up and use
  • Seamless integration with the cloud version running on BrowserStack's infrastructure
  • Logical layout of UI, little to no training required
  • UI validation using the DOM or, more importantly, screenshots with BrowserStack's AI verification (requires further review).  This has the potential to self-heal: the screen changes, but the validation can be smart enough to understand it is just an updated screen (for example, a single colour change on the page, or the position of the name moving).
  • SDK is available to work with the full BrowserStack platform.
  • Not low-code specific, but BrowserStack generally has new phones included in its offering within days of release.

Cons:

  • Provide a webhook or allow for a REST client call as a step (I'd want to log directly from the test run into Azure Monitor)
  • More run options - I'm sure it's already on the roadmap, but I'd like the ability to run every hour for continuous monitoring.
  • Refresh tokens on a schedule (allows you to avoid MFA such as SMS codes or Authenticator).
  • Make it clear whether the run is local or from the browser, and keep the historical logs for both together.
  • Export results - I could not find this, but it would help compare step performance.
  • I use DevOps; I'm unlikely to take the whole BrowserStack platform unless I need the emulators, which is what BrowserStack was originally famous for.

Summary: This is an excellent tool for testing; the low-code product was still in beta when I reviewed it.  It is a nice tool and has the potential to disrupt the market.  I feel Playwright is a better point solution and integrates well with CI/ALM platforms.

References: https://www.browserstack.com/low-code-automation/features

Other

Image 2. Emulate a Samsung Galaxy phone on Android using the Chrome browser.

Thought: I like BrowserStack's reporting, clean and simple on tests and easy to get the history.

Wednesday 15 November 2023

Ignite 2023 - Microsoft Fabric - Introduction

GA: Prepare your data for AI innovation with Microsoft Fabric—now generally available | Microsoft Fabric Blog

Microsoft Fabric is a unified platform that brings all your analytics under a single service - everything brought in and available for analysis in one place.

OneLake - one per Fabric tenant.  Stores all data within the SaaS data lake (which scales itself), automatically indexes data, and abides by AIP rules/labels.  Intelligent data foundations.

All data is held in the Delta Parquet format (same format for any source).  Data is ready to use.  One copy of data.

A single SaaS service, so there is no need to bring the pieces together, and data sources don't need to be moved and sliced.  You can query using multiple approaches.  You can create a shortcut to files/folders/Databricks and it becomes part of OneLake; the data stays at the original source but can be worked with.

Mirroring in MS Fabric - get the same benefits as shortcuts, but can connect to databases including Snowflake, Dataverse, AWS S3 buckets & Cosmos DB.  Mirroring is always up to date in real time.  Data is stored in Delta Parquet format, so it is ready to use.  With these two approaches you can use nearly any source - lots of connectors, so you could use Dataverse, Cosmos, Snowflake, SQL Server, blobs on S3, ...  Then you can write queries across all the data.

Copilot in Microsoft Fabric will help bring in all the data, and help analyse the data.

Copilot for Power BI is amazing for building reports - need to play with it.

Ignite 2023 - Keynotes - Summary

15 & 16 November 2023

Good Keynote: AI is driving a lot of innovation.

Microsoft Fabric is in GA (25k instances already).  A new feature is 'Mirroring' - copy Cosmos/SQL et al. into Fabric.  OneLake.  Can bring lots of data from multiple sources into Fabric in near real time.

MS Teams (320 million users):  Bring everything to the user in one place, not just communications but a canvas for apps.  Good place to build line of business applications.  New teams app - way faster, easier to use.  Teams Premium - intelligent meeting recap is working well, can integrate recap with copilot. 

Copilots - needed for nearly everything you do; they understand the context of where you are.  MS has hundreds of copilots.

Copilot Studio - custom GPTs; can add plugins to add your own data, and can hook into an enterprise's unique data.

Copilot for Service - allows agents to get information to provide support, looks interesting.

Personal thoughts: AI is going to be a mega trend that will influence the world hugely, and there will be lots of weird decisions on the journey.  Currently, it is mainly proving useful as another tool to help improve existing processes.  AI helps me work faster and spend my time on exploration rather than on bringing base understanding together.

Part 2 Keynote:

Microsoft Graph gives the copilot context within an organisation.  Use plugins to add enterprise data or Open AI GPT's.

Surface Hub 3 released - looks good; the rest of the hardware looks higher spec.

SharePoint Premium - improved knowledge and content management on SharePoint.

Copilot Studio - useful to build internal copilots.  1. Connect copilot to other systems using plugins or GPTs 2. create workflows 3.  Controlled by IT.

Copilot Studio overview

Mesh - Teams can join immersive experience/events, not sure what this means.  GA expected Jan 2024.

Microsoft 365 Copilot released to GA on 1 November.

Why Copilot? MS are describing it as a productivity multiplier.  Allows users to be more productive and more creative.  Improves quality of work, avoids searching - as expected.  Makes mistakes but is getting better.

Microsoft Copilot - Bing Chat is now just Microsoft Copilot.

So when logging in, use Entra ID (Azure AD) to get contextual enterprise information.  It inherits security and privacy policies, is ACL controlled, and includes the MS Graph and apps.

Try it out: copilot.microsoft.com - chat data is not saved/stored by MS.  Change from "web" to "work".  Also available in the Windows taskbar.

Ability to use copilot to pull in information, show more graphs, get data.  Good word example getting data from a pptx.

Great example of querying Excel using copilot, created a pivot table. Contrived but looks good, added rainfall from web to look at sales.  Powerful.

Never thought of copilot as a participant in a meeting - might be amazing.  In a Teams meeting it takes real-time notes, pulls in info, and summarises points for the next meeting.  Add it as a collaborative partner; it can visualise discussions on a meeting whiteboard.

Loop - flexible collaboration, now with copilot.  Not my area, but it sounds impressive - I don't get it.  People working with people, now also working with copilot.  Okay.

Copilot for Sales - looks promising.  Hooks into existing CRMs.

Copilot for Service - working with customers, get data that is correct to solve customers problems.  Concise summary, helps craft emails, updates CRM.  Looks very interesting!

Viva - Microsoft copilot dashboard powered by Viva - not sure on this topic.

Summary: People using copilot don't want to lose it.  AI is bringing big changes to many industries.  The promise is to take the grind out of work - sounds great, let's see.  Copilot/AI will be a tool and will shape how we work.

Monday 30 October 2023

Thoughts on Logging and Monitoring

Overview:  I mainly work in the Microsoft stack, so my default for logging is Azure Monitor.  Log Analytics workspaces and Application Insights fall under the term Azure Monitor.

Going forward, MS is storing App Insights logging data within a Log Analytics instance.

There are 4 options for displaying/analysis logs in Azure:

  1. Azure Dashboards
  2. Power BI
  3. Grafana
  4. Workspaces

SIEM tools take in logs from various sources such as Azure Log Analytics, Defender, other vendors' Prometheus logs, or OpenTelemetry.

Grafana can be used on most SIEMS including Dynatrace, NewRelic, Microsoft Sentinel, or Azure Monitor.  Grafana supports PromQL and has fantastic dashboarding.

Azure DDoS Protection Overview:

Microsoft has the "Azure DDoS Protection" service that can help protect your network endpoints from DDoS attacks.  Common DDoS attacks basically use hundreds of bad actors to flood traffic into your architecture to overwhelm it.  Restricting traffic from the bad-actor sources is key.  Mixing the Azure DDoS Protection service with Azure WAF allows us to identify and block just the bad actors.

DDoS attacks are increasing: multiple bad actors try to overwhelm your resources.  Rate limiting can help, but ideally you want to let through valid traffic and block bad traffic.  The Azure DDoS Protection service can be coupled with a WAF to protect correctly from DDoS attacks.  These are normally UDP flood attacks, but it also protects against HTTP(S) flood and TCP flood attacks, covering layer 3-4 attacks.

Two SKU's:

  • DDoS Network Protection: used on a VNet; the service will work out and protect your public nodes.  Can put this in front of Azure WAF, Azure Firewall, or Azure Front Door.
  • A cheaper alternative is DDoS IP Protection, which has most of the features; if you only have a specific IP, like a web-traffic IP, it's a good option.
More Info:

Sunday 29 October 2023

Mendix Overview

Overview:  Mendix is a low-code app builder that is a leader in the market.  While I predominantly use the Power Platform, I think Mendix can be a good option.

The ALM has version control: this is intuitive, follows a local checkout and commit back to a main branch (simple version control), and allows you to use branches, so it is comprehensive and flexible.  It is a good idea to check in small and often, or you run the risk of large, complex competing merges.  I believe it is Git underneath, but from the Mendix Studio IDE it is seamless.

Build a local version using Mendix Studio Pro, and deploy to the cloud.  There are several options, including on-prem.  The free version is basic and has limitations but has proven to be extremely powerful.

Mendix supports sprints and boards, so you can work with user stories in the Developer Portal for ALM.

An App Package can be stored and it is a good idea to use this as the base for all projects in your company, so basic branding and naming conventions are consistent.

Deployment anywhere, such as on-prem via a Kubernetes deployment, as well as the major cloud platforms, i.e. AWS, Azure, GCP, Oracle.

Market Place - templates, connectors, components to reuse. 

Domain modelling is excellent: you can choose your database when creating the app, modelling is easy, and exposing data via an OpenAPI contract and generating CRUD screens is easy.

Publishing production versions to the cloud is very easy, and the local version is seen on localhost while developing.

Image 1, High level overview of the logical components making up Mendix.

Pros:

  1. Easy to use.
  2. Basics for Low code are all included such as version control, project management, deployment/publishing.
  3. Build native mobile apps.
  4. Improve business process easily.
  5. Supporting multiple languages is unbelievably simple and easy.

Image 2. Add multiple Languages to your app

Simple exercise: Call a key-secured API and display the result on a Mendix page, after watching this 7-minute video on API calls.

The running example has:

  1. Various pages and forms for showing and persisting database information. 
  2. A REST Call to a 3rd party using OAuth key.  
  3. Publishing a REST API based on a table and an associated entity.
  4. Displays an Azure Chatbot

Me playing around with a Mendix App:

1. Get a REST endpoint and verify it using Postman (using a key for secure access)

Image 3. Postman showing the REST call to be used

2. Create a new "microflow" as shown below:

3. Add a new "Action" of type "REST Call"
4. Add a JSON Structure file


5. Decide which attributes to pull out

6. Create an "Entity" in the Domain model to hold the retrieved data.
7. Map Model to the Import as shown below

...


Mendix Series

1.  Overview of Mendix (this post)

2. Mendix - Part 2 - Diving deeper

Tuesday 10 October 2023

Dynamics & Power Platform browser extensions and tools

Key tools and browser extensions for Dynamics and Power Platform developers:

  • Level up for Dynamics (extension)
  • Dynamics 365 Power Pane (extension)
  • Microsoft Power Automate Desktop (extension)

Thursday 14 September 2023

Microsoft Azure Artificial Intelligence (AI) - AZ-900 Notes

1. Artificial Intelligence (AI) 

  • AI makes PCs behave like human intelligence.
  • Teach the PC to do tasks for us.
  • The PC predicts using patterns and can act, and is good at looking for anomalies.
  • The PC uses cameras/photos to look for patterns.
  • Engage in useful conversations, using multiple sources of knowledge.

2. Machine Learning (ML)

  • Train PCs to see patterns and look for anomalies.
  • Example: predict stock prices by looking at factors that affect the stock price.
  • Anomaly detection - detects unusual patterns, e.g. a credit card used in Asia when normally used in Europe, with the transactions 10 min apart - therefore likely to be fraudulent.
  • Predictive models work by finding relationships.  Give the model data and train the model.
  • Example: flowers have features/characteristics e.g. colour, size, number of petals, ...
  • Using data to teach the machine.
  • Supervised ML - needs quality data including labels.  Avg humidity, hrs sunshine, rainfall, temp, month of year (features) and ice creams sold (label/class): feeding in features to predict a number is Regression ML.  A patient has features (weight, sex, age, BMI, ...); giving a value between 0-1 for the person developing diabetes is Classification ML.
  • Unsupervised ML - data is not labelled.  Just features are provided; the model groups the data into clusters, figuring out its own criteria - this is Clustering ML.  Useful for fraud detection.
  • Training - split good data into a training set and a validation set.  Train the model with most of the data, then check with the remainder - this allows us to see how close predictions are to what happened.  The service tries to figure out the relationships.  The model is then used on test data to see how close/useful the model is.
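The train/validation split in the last bullet can be sketched as follows (a minimal illustration; real pipelines shuffle the rows first):

```typescript
// Hold back a slice of labelled rows so the trained model can later be
// scored on data it has never seen.
function trainValidationSplit<T>(
  rows: T[],
  trainFraction: number
): { train: T[]; validation: T[] } {
  const cut = Math.floor(rows.length * trainFraction);
  return { train: rows.slice(0, cut), validation: rows.slice(cut) };
}

// e.g. an 80/20 split of ten labelled rows
const { train, validation } = trainValidationSplit([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 0.8);
```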

3. Computer Vision 

  • Self-driving cars, sorting (e.g. sorting rubbish).
  • Facial recognition, object recognition, ...
  • How do computers see?  The picture is cut up into pixels; the data is pulled out and used to find possible answers.
  • Some types on Azure: object detection, i.e. car, bike, bus.  Image classification, i.e. horse, car.  Semantic segmentation, i.e. the Teams blurred background.  Image analysis - context by bringing various parts together.  Face detection.  OCR - reads an image and converts it to text.

4. Natural Language Processing (NLP) 

  • Interpret language, e.g. Grammarly, spam checks, Alexa.
  • Knowledge mining - extract info from knowledge and gain insights, e.g. social media marketing.

Principles:

  1. Fairness - ensure bias is excluded e.g based on gender.
  2. Reliability & Safety - need high confidence; certain systems cannot fail, e.g. health systems, autonomous cars.
  3. Privacy & Security - ensure data is protected and sensitive data is not given away.
  4. Inclusiveness - should be fair to everyone, e.g. visually impaired users.
  5. Transparency - what is the model based on, what could be an issue.
  6. Accountability - who is liable for AI decisions
Azure:
  1. Scalability & Reliability
  2. AI Resources: sit in an Azure Resource Group

AI Services in Azure:
  1. Azure Machine Learning - for developers to train, test and deploy ML models.  Within a subscription, create an Azure ML Workspace (consisting of compute, data, jobs, models), which can then be published as a service.  Azure ML Designer is used for creating an ML pipeline, with data going in to train the model.  With Automated Machine Learning, the user only needs to provide the data and select the model type; the service figures it out.
  2. Cognitive Services - vision, speech, language, decision.  REST API endpoints that have already been trained; you choose the model.  Can deploy multiple parts individually or together.

  3. Azure Bot Service - develop & manage intelligent bots like chatbots
  4. Azure Cognitive Search - data extraction & enrichment for indexing.  Makes data searchable.

Anomaly detector resource - wizard to setup - Add Keys and endpoints to allow access.

Create a new Azure Machine Learning Service; this will create a Workspace.  It uses multiple Azure services such as Key Vault, AI, and storage accounts.
  • Launch Studio
  • Add Compute Cluster
  • Add Data (csv, spreadsheets, nearly any form,...)
  • AutomateML (figure it out without me) and run job
  • Will show trends
  • Deploy the model (i.e. to a web service)
  • Shows "Endpoints" - get url and a test rig.

Friday 8 September 2023

Notes for running Agile Power Platform Projects including DevOps

Overview:  General overview notes on setting up Power Platform projects/programs.  Before I get into the mechanics, my overriding goal is to have high-functioning teams, and to be a member of high-functioning teams.  "Create an environment where team members can do their best work."  For instance, I visit and work with a lot of businesses, and I see many teams that are in an "artificial harmony" state (pretending all is well with the world).  Everyone says it's wonderful, but it's a snake-pit of relationships and fear.  Team members need to be happy, open to conversations, accepting of risk, and aware that mistakes are going to happen.  Basically, these teams need to be identified and trust built; this often involves an adjustment to a particular mid-level manager.  The worst offenders tend to be offshore teams - though there are amazing teams and people, so this is definitely a generalization.  These teams tend to be hierarchical as opposed to flat or matrix, and that's terrible for software projects.  Look out for technical leads, ISV project managers, and delivery leads; they can breed the wrong tone/attitude across multiple team members and teams.  Check out Amy Edmondson's book Right Kind of Wrong: The Science of Failing Well.  Anyway, rant over.

Learn from mistakes.  Remove simple mistakes ASAP, think strategically about automation, and when we learn from mistakes, ensure they don't happen on the next project - or sooner.  Encourage transparency and open communication.


Here are my notes for setting up ADO and guidance for Agile PP projects....

Agile Artifact Hierarchy:

Epics > User Stories (max 5 days work) > Tasks
Epics > User Stories > Bugs
Epics > Spikes

Guide:

  • User Stories must be written in the format: As the <role>, I want to <feature> so that <benefit>, and have 1 to many Acceptance Criteria written in Gherkin 
  • Ability to add a release note to User Story, Spike, Task or Bug.
  • Automate pipelines from unmanaged to UAT Managed.
  • Min three envs (Dev - unmanaged, and UAT/Prod - both managed); use ADO pipelines or Power Platform pipelines
  • Adding annotated images is great for improved communication.  Recorded voice-narrated mp4 walk-throughs are also great for proofs and for explaining issues.
  • US, Bug, Task, and Spike artifact items each have a Release Note tab.  So if a User Story needs more than one solution package changed, use child tasks and add the release notes to the User Story.

Flow of bugs and User Stories:

  1. New (Anyone)
  2. Approved (Product Owner (PO)) 
  3. Ready - (PO)
  4. Committed - (Team Member/dev)
  5. Dev - In Progress (dev)
  6. Dev - Complete (dev)
  7. Dev - Show QA (dev & QA)
  8. UAT - Ready to Deploy in UAT (dev)
  9. UAT - Deployed Ready for Testing (QA)
  10. UAT - Manual Testing (QA & PO)
  11. UAT - Complete Ready for Deployment to Prd (QA)
  12. PRD - Deployed
  13. PRD - Sanity Check (can include automated smoke testing)
  14. PRD - Done
Example statuses
Other states:

  • Removed
  • Duplicate
  • Not Applicable

Release Notes for Power Platform packages need to include the following fields in ADO against artefacts:

  1. Package Name (dropdownlist), 
  2. Current Package Version, 
  3. New Package Version (Default TBC), 
  4. Change Note,  
  5. Deployed Status (dropdown list: NA, UAT, PRD), 
  6. Pre deployment steps, 
  7. Post Deployment steps
Example of ADO Release Notes assigned against tasks, bugs, and user stories.

Quick fixes/urgent bugs/Emergency changes:

  • Try to make release cycles as short as possible, and only do emergency changes if absolutely required.
  • Take a snapshot copy of dev/label, for each proposed production deployment - unmanaged env - part of ADO pipeline, this allows us to build Dev, and UAT env for the specific emergency change.
  • Take a snapshot of UAT - managed env - part of ADO pipeline
  • Deploy to PRD from Emergency UAT.
  • Developer integrates emergency change into Dev from the Dev Copy.  And follows the full path.
Team/Teams:
  • Try to keep teams as small as possible.  I prefer one team to multiple scrum teams unless there is a clear distinction/break.
  • The Product Owner (PO) needs to be available all the time and answer immediately.  To me, they act as the business and the traditional BA role, and are responsible for the product backlog.
  • Scrum masters: your job is to ensure the team members are happy and confident to take risks and work; Scrum ceremonies are merely a way to help out.
  • Team members are mainly pro and citizen developers.  If I use dedicated QA testers in the scrum team, they need to be responsible for the AC with the PO.  They tend to be analyst/developers.
  • Automate, automate, automate.  There are fantastic tools, including low-code test tools - use them.  Ensure you have automated smoke tests, regression tests, and performance tests for each DTAP env.
  • Have short coding standards, naming conventions, and error-handling patterns, and enforce them.  Have a defined ADO process, a pipeline for deployments, and automated tests, and continuously update them.  Have a monitoring strategy, i.e. Azure Monitor, logging via App Insights on a Log Analytics workspace.  Each env logs to its own Azure Log Analytics.  Should each env's Log Analytics belong to its own workspace?  I prefer non-prod and prod workspaces.
  • Teams/Slack - okay, just Microsoft Teams.  Remote work makes for happier team members and gives people more time; use it.  But encourage cameras to be on; email is not a defence (ever); people must IM/chat/ping and call each other.
  • Encourage meeting up, with inclusive social events from once a month to once a week.  Encourage people to work together online, including peer programming.

Thursday 7 September 2023

Extend Power Automate Logging

  1. Power Automate has a connector to query other Power Automate environments to list and update flows, ...
  2. PowerShell to examine Flow/Power automate

https://www.cloudsecuritea.com/2019/09/generate-an-overview-of-all-microsoft-flows-with-powershell/

Use Postman to interact with an API - get the bearer token first.
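The same token-then-call flow can be sketched in code (a hypothetical sketch: tokenUrl and apiUrl are placeholders, and the token request shape depends on the identity provider):

```typescript
// Build the Authorization header Postman would attach after fetching a token.
function authHeader(token: string): Record<string, string> {
  return { Authorization: `Bearer ${token}` };
}

// Hypothetical two-step call: POST for a token, then GET the API with it.
async function callApi(tokenUrl: string, apiUrl: string): Promise<unknown> {
  const tokenRes = await fetch(tokenUrl, { method: "POST" });
  const { access_token } = (await tokenRes.json()) as { access_token: string };
  const res = await fetch(apiUrl, { headers: authHeader(access_token) });
  return res.json();
}
```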