Monday 27 February 2023

Emergency fixes into a controlled production Power Platform environment

Power Platform makes it easy to push changes through environments using managed solutions.   This is a simple way to let development continue while deploying an emergency fix quickly and then getting it back into the "main branch".

Deploying a hot fix into production within a DTAP regulated environment

If you build the Power Apps environment dynamically, then hot fixes are easy.
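As a rough sketch of what a hotfix deployment can look like with the Power Platform CLI (the environment URL and solution file name below are placeholders, not values from this post):

  # Authenticate against the production environment (placeholder URL)
  pac auth create --url https://contoso-prod.crm.dynamics.com

  # Import the managed hotfix solution exported from the dev/hotfix environment
  pac solution import --path .\MySolution_1_0_0_2_managed.zip --async

  # Confirm the new solution version is present in the environment
  pac solution list

Because the hotfix goes in as a managed solution, the same package can later be merged back into the normal release pipeline.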

There are still open questions to be answered around how long backups are held for, and around taking backups and restoring them to new sandbox environments.

The Microsoft Power Platform ALM documentation is very good.

Friday 24 February 2023

Environment variables for Power Automate

Overview: Environment variables are great in Power Platform, but when using managed solutions and ALM there are a couple of points that will make your deployment and support easier.

Background:  Power Platform has Environment Variables (env vars) that are stored in two Dataverse tables: environment variable definitions and environment variable values.

We deploy env vars through solutions and can easily amend them by adding an unmanaged layer.

Problem:  In your managed environment you end up with a tangle of env vars that makes upgrading solutions fail.  In a nutshell, deleting an unmanaged layer using the UI only clears the value part of the env var, not the definition part.  The unmanaged-layer env var is made up of two parts, stored in the two environment variable tables in Dataverse, and both must be removed.

It makes sense: in UAT we have env vars initially set up in one solution, then unmanaged layers are added when we amend the values, and later we deploy the latest env vars from a different solution.

What I have seen is env vars being deployed as part of a one-solution-for-all-artefacts approach; as the project grows, more solutions are added for packaging and each of these solutions has a few more env vars.  Eventually, as you use the env vars across multiple canvas apps and Power Automate flows, you build a dedicated solution just for env vars.

Best Practice:  Well, what I recommend at least:

  1. Put all variables into a single solution from the start; it makes it easy to deploy them quickly across all your DTAP environments, e.g. UAT, Prod.
  2. In the unmanaged solution, ensure the env variables do not have a "current value".  Set the "default value" in Dev.
  3. Run settings files to fill the "current value" in each DTAP environment; keep the current values for each environment in a single settings file that the pipeline pushes, i.e. setting-uat.json, setting-prd.json (see the sketch below).
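A minimal sketch of step 3 with the Power Platform CLI; the solution file name and environment URL are placeholders, and the generated settings file is edited per environment before the import:

  # Generate a settings-file template from the managed solution (file names are placeholders)
  pac solution create-settings --solution-zip .\EnvironmentVariables_managed.zip --settings-file .\setting-uat.json

  # Edit setting-uat.json so each environment variable has the UAT "Value", then import it
  pac auth create --url https://contoso-uat.crm.dynamics.com
  pac solution import --path .\EnvironmentVariables_managed.zip --settings-file .\setting-uat.json --async

The pipeline simply repeats the import with setting-prd.json against the production environment.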
Tip: If you need to change any value, merely rerun the solution containing the env vars; don't ever use an unmanaged layer to change env vars.

Tip: It's better to build your Power Platform environment in DevOps pipelines but if you use existing environments and merely push solutions on top (much more common), then clean up your existing vars as outlined below.
  1. Delete both the definition and value parts (manually) until the UAT and PRD environments have no custom env vars - use the Dataverse Web API (see the sketch after this list).
  2. Run the single env var solution.
  3. Never add unmanaged layers to env variables; if you need a value changed, change the solution package and deploy - it should take minutes to do.
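A minimal sketch of step 1 against the Dataverse Web API, assuming an environment URL, a bearer token you have already obtained, and a publisher prefix ("new_") that are all placeholders:

  # Placeholders: environment URL and an access token obtained elsewhere (e.g. via pac or MSAL)
  $orgUrl  = "https://contoso-uat.crm.dynamics.com"
  $headers = @{ Authorization = "Bearer $accessToken"; "OData-Version" = "4.0" }

  # Find the custom environment variable definitions (publisher prefix is a placeholder)
  $defs = Invoke-RestMethod -Method Get -Headers $headers -Uri `
    "$orgUrl/api/data/v9.2/environmentvariabledefinitions?`$select=environmentvariabledefinitionid,schemaname&`$filter=startswith(schemaname,'new_')"

  foreach ($def in $defs.value) {
      # Delete the value rows first, then the definition row
      $vals = Invoke-RestMethod -Method Get -Headers $headers -Uri `
        "$orgUrl/api/data/v9.2/environmentvariablevalues?`$select=environmentvariablevalueid&`$filter=_environmentvariabledefinitionid_value eq $($def.environmentvariabledefinitionid)"
      foreach ($val in $vals.value) {
          Invoke-RestMethod -Method Delete -Headers $headers -Uri "$orgUrl/api/data/v9.2/environmentvariablevalues($($val.environmentvariablevalueid))"
      }
      Invoke-RestMethod -Method Delete -Headers $headers -Uri "$orgUrl/api/data/v9.2/environmentvariabledefinitions($($def.environmentvariabledefinitionid))"
  }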

Sunday 19 February 2023

Setting up Azure Application Insights for Monitoring Power Platform Canvas Apps

Overview: We are building key applications in Power Apps.  It is essential that the appropriate level of monitoring, alerting, and tracing is set up.  The diagram below provides an overview of likely solutions.

The top half of the diagram calls out the client components used in the application; you need to add instrumentation keys to ensure logging goes to the correct DTAP environment instance, i.e., production components used in the solution must point to the production instance of Azure Application Insights.  The diagram only deals with production.  I prefer to point the lower environments to a non-prod instance.

The bottom half discusses decisions that are key to making the monitoring and alerting successful.  It should aim to:
  1. Ensure errors are detected,
  2. Ensure errors can be traced,
  3. Show that the system is healthy,
  4. Show if any components are down,
  5. Show whether performance is stable,
  6. Warn me before the system goes down,
  7. Set up alerting (don't over-alert, and ensure it is focused on the right people), and
  8. Validate deployments.
I turn on the experimental features:

A gotcha with App Insights in canvas apps applies to managed solutions: if you add an App Insights instrumentation key, or leave it blank, there is no easy way to override the value.  You can add an unmanaged layer, but the issue is that on the next deployment the app only picks up the new version once the unmanaged layer is removed, and you then need to manually re-add the unmanaged layer with the appropriate instrumentation key after each deployment.  There are workarounds that extract the solution, amend the setting, and repackage it using the Power Apps CLI, but they have issues.
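A rough sketch of that extract-amend-repack workaround; the file names and the dev-key GUID are placeholders, $prodInstrumentationKey is assumed to be set beforehand, and the assumption that the key sits in a plain-text source file (rather than inside a binary .msapp) should be verified for your solution:

  # Unpack the managed solution to source files (names are placeholders)
  pac solution unpack --zipfile .\MyApp_managed.zip --folder .\src --packagetype Managed

  # Assumption: the instrumentation key is stored as plain text in a JSON/XML/YAML source file.
  # Replace the dev key (placeholder GUID) with the production key, text files only.
  Get-ChildItem .\src -Recurse -Include *.json,*.xml,*.yaml -File |
    ForEach-Object {
      (Get-Content $_.FullName -Raw) -replace '00000000-0000-0000-0000-000000000000', $prodInstrumentationKey |
        Set-Content $_.FullName
    }

  # Repack the amended solution ready for import
  pac solution pack --zipfile .\MyApp_prod_managed.zip --folder .\src --packagetype Managed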


Other thoughts:
It is a good idea to use the App Insights SDKs to trace key info within each service.
Power Automate should use the try/catch pattern (scopes with run-after conditions) for logging.  I log to Azure Log Analytics using the built-in connector.

Friday 17 February 2023

Postman Monitor for Continuous Monitoring and Alerting in MS Teams

Overview: Pretty much every tester and developer loves Postman, and that is because it makes our lives easier and it is just plain awesome.  Postman is bringing out tons of new features, and I was playing around today looking at how I could do continuous monitoring with my Postman collections.

Thoughts & Playing:

I have a Postman collection that runs 8 requests and does 14 asserts.  The first request gets a new OAuth token using AAD login.  Then I do a series of requests, asserting on each call that I get a 200 response code and that the response time is less than 3 seconds.  I can run the collection locally.  Level 100 API verification looks good.

In the past, I have taken this collection and run it from a shortcut on my desktop using PowerShell with the Postman CLI to display the results.  Makes my life easier.
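Roughly what that desktop shortcut runs; a minimal sketch assuming the Postman CLI is installed, and the API key, collection file and environment file are placeholders:

  # Authenticate the Postman CLI (API key is a placeholder, stored in an environment variable)
  postman login --with-api-key $env:POSTMAN_API_KEY

  # Run the collection and show the request/assert results in the console
  postman collection run ".\my-api-checks.postman_collection.json" --environment ".\uat.postman_environment.json"

  # Make the shortcut fail visibly if any assert failed
  if ($LASTEXITCODE -ne 0) { Write-Warning "API checks failed - see output above." }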

I then added an Elgato Stream Deck so I can run the monitor with a single button push (more me playing than real value).  I'd say I'm at level 200 in continuous monitoring capability.

Next, I set up a monitor on the collection, which allows me to log in and view the dashboard and traces (great stuff), and I get an email alert if anything goes wrong.  So now I'm getting serious about monitoring and alerting on my APIs.  Level 300 is approaching.

Postman monitoring has integrations for MS Teams and Slack.  It can also send logs to Datadog and New Relic, but not Application Insights (I reckon this will come soon).  I set up a channel in Teams with a webhook, and I can send in the results using Postman, but it's way easier to use the integration on the monitor to push the result of each run, or to push automatically after 3 failures.

Summary:  This Postman monitoring allows me to send detailed API requests at different intervals, so I'm thinking for production:
  • Every 5 minutes, health and basic checks (look for performance and service slowdown or failure; add alerts but don't over-alert, so post to a Teams channel and only notify the wider Teams group if the service breaks),
  • Hourly, check key functionality/APIs including CRUD operations, and clean up (ensure the service is operating for the most key endpoints), and
  • Daily, in the early hours, run a full API regression test set and clean up afterwards (support/help desk need to review each day).
Don't over-alert; let me say that again: don't over-alert.  Alerting is like water: you definitely need a little, but floods are not great.  So with Teams & Slack it's easy to push results and issues into a channel so key people are aware, and it gives a much better experience than email alerting.

I like the idea of using Postman as its infrastructure is separate from the Azure/MS stack I generally use, including Application Insights.

What Next:  I'd like to figure out how to push results into my logs so I can report off a single source.  I could embed the Postman monitoring into iFrames, but I'd probably use an Azure Logic App or Azure Function to listen for the Postman POST; then I can format adaptive cards for Teams and Outlook, and easily integrate Twilio for SMS or maybe WhatsApp.  From the Logic App I can use the Application Insights SDK to add tracing.

Combining this with correlation IDs and App Insights, I can see issues, have them summarised, get the right level of alerting, and trace specific issues quickly.  Ideally we capture issues before customers report them, and if a customer reports an issue it can be 100% traced, remediated, and fixed for all customers quickly.  Tracking changes to APIs and compatibility is also a nice benefit of this approach.

Sunday 12 February 2023

Adding Adaptive Card messages into Teams using Postman

WIP: I've wanted to play with adaptive cards for a while, but this post is about using the webhooks that Teams can expose so that adaptive cards can be pushed into a channel.

1. Configure a Teams channel to support incoming webhooks

2. Run a Postman POST request to push the card into the MS Teams channel


Tip: Ensure you add the header to the Postman request:
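The header in question is Content-Type: application/json.  A minimal sketch of the same POST from PowerShell; the webhook URL is a placeholder, and the adaptive card is wrapped in the message/attachments envelope that Teams incoming webhooks expect:

  $webhookUrl = "https://contoso.webhook.office.com/webhookb2/placeholder"   # placeholder incoming-webhook URL

  $body = @{
      type        = "message"
      attachments = @(@{
          contentType = "application/vnd.microsoft.card.adaptive"
          content     = @{
              '$schema' = "http://adaptivecards.io/schemas/adaptive-card.json"
              type      = "AdaptiveCard"
              version   = "1.4"
              body      = @(@{ type = "TextBlock"; text = "API monitor: all checks passed"; weight = "Bolder" })
          }
      })
  } | ConvertTo-Json -Depth 10

  Invoke-RestMethod -Method Post -Uri $webhookUrl -ContentType "application/json" -Body $body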

Saturday 11 February 2023

Audit log retention in Dataverse

Overview: Audit data log retention is now fairly easy to implement in Dataverse; you can set what is audited and set how long it is retained.

Thoughts: As a simple version, I'd audit all changes in Dataverse and set the retention to 7 years.  Now this could end up costing you a considerable amount of money, so consider: do I need to audit everything, do I need to retain it this long, can I use a long-term storage retention approach?  There are a variety of reasons for customising Dataverse data retention, including: to comply with laws and the potential need for litigation, to comply with industry standards/certification, and to keep a full history to understand why we have the current data position.
  
Ultimately, I need to identify/understand how to store audit history, clean it up when no longer needed, ensure it is not affecting live system performance, and ensure it can be retrieved by authorised people in the timeline required for each project or at an enterprise level.

If a system changes a lot and uses blobs, the audit history will be large, and Dataverse is not necessarily the best place to store long-term audit history.

Technical: Dataverse stores audit data in an Audit entity (table); the infrastructure was changed in late 2022 to handle audit data separately, allowing better non-functional requirements to be met.
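As a hedged sketch of setting this programmatically rather than through the admin centre UI, assuming an environment URL and bearer token; the organization-table attribute names used here are my assumption of the newer retention setting and should be verified against your environment:

  $orgUrl  = "https://contoso-prod.crm.dynamics.com"   # placeholder environment URL
  $headers = @{ Authorization = "Bearer $accessToken"; "OData-Version" = "4.0" }

  # There is a single organization row; fetch its id
  $org   = Invoke-RestMethod -Method Get -Headers $headers -Uri "$orgUrl/api/data/v9.2/organizations?`$select=organizationid"
  $orgId = $org.value[0].organizationid

  # Turn auditing on and set retention to roughly 7 years in days
  # (isauditenabled / auditretentionperiodv2 are assumed attribute names - verify before use)
  $settings = @{ isauditenabled = $true; auditretentionperiodv2 = 2555 } | ConvertTo-Json
  Invoke-RestMethod -Method Patch -Headers $headers -ContentType "application/json" -Uri "$orgUrl/api/data/v9.2/organizations($orgId)" -Body $settings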

Wednesday 1 February 2023

SaaS Product surrounding services

Onboarding - The ability to allow customers to sign up for trials, sign up as a customer, and handle payments and billing.

Sales Channels - The ability to bring on new customers from various channels such as other websites, digital adverts, telephone, etc.  Can be anything from very simple to super complicated.

Service Status Page - Allows clients to see the current status of your SaaS products, e.g.

https://portal.office.com/servicestatus

https://status.quickbooks.intuit.com

A lot of incident management software offers white-labelled status pages.

SaaS solutions include: 

statuscast.com

statushub.com