Saturday 18 March 2023

Canvas Apps Workflow logging linkage

Overview:  Power Automate has good monitoring and analysis within the product, and Canvas apps send instrumentation to App Insights and allow for custom tracing.  The issue is linking Canvas app logs to a called workflow.  In this video (2 min), I discuss linking Traces in Azure App Insights with flows run on Power Automate.

By passing the workflow run ID back to the calling canvas app, a deep link allows support to follow a user's activity, including drilling into the flows being called.
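As a rough illustration, the deep link can be composed from the environment, flow, and run ids.  The PowerShell sketch below shows the idea; the URL pattern and all ids are assumptions, so verify the format against a real run URL in your tenant.

# Compose a deep link to a flow run (URL pattern is an assumption based on
# the maker portal format - verify against a real run URL in your tenant)
$environmentId = "11111111-2222-3333-4444-555555555555"  # hypothetical environment id
$flowId        = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"  # hypothetical cloud flow id
$runId         = "08585012345678901234567890123456789"   # run name returned by the flow
$deepLink = "https://make.powerautomate.com/environments/$environmentId/flows/$flowId/runs/$runId"
# Trace $deepLink (e.g. as a customDimension) so support can click straight through
$deepLink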



Add additional logging from your flows to a custom table in Azure Log Analytics:
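Inside a flow, the Azure Log Analytics Data Collector connector does this; the same ingestion can also be scripted for testing.  Below is a minimal PowerShell sketch against the HTTP Data Collector API - the workspace id, key, table name, and payload are placeholders.

# Minimal sketch: push a JSON record into a custom Log Analytics table
# via the HTTP Data Collector API (placeholders throughout)
$WorkspaceId = "<workspace-id>"
$SharedKey   = "<primary-or-secondary-key>"
$LogType     = "FlowErrors"          # becomes the custom table FlowErrors_CL
$body        = '[{"flowName":"TaxAPI.NinoSearch","error":"timeout","correlationId":"aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"}]'

# Build the HMAC-SHA256 signature the API requires
$date          = [DateTime]::UtcNow.ToString("r")
$contentLength = [Text.Encoding]::UTF8.GetBytes($body).Length
$stringToSign  = "POST`n$contentLength`napplication/json`nx-ms-date:$date`n/api/logs"
$hmac          = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key      = [Convert]::FromBase64String($SharedKey)
$signature     = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign)))

Invoke-RestMethod -Method Post `
  -Uri "https://$WorkspaceId.ods.opinsights.azure.com/api/logs?api-version=2016-04-01" `
  -Headers @{ "Authorization" = "SharedKey ${WorkspaceId}:$signature"; "Log-Type" = $LogType; "x-ms-date" = $date } `
  -ContentType "application/json" -Body $body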


Friday 17 March 2023

Why is Postman so fantastic?

Overview: Lots of technical IT people use Postman for API creation, exploration, and testing, but there is much more to the product than most developers are aware of.  Postman started out as a test rig for developers to explore and test APIs.  It was built as a Minimum Lovable Product (MLP) initially, and the features added over the years are extremely useful.  Most users stick to the basics but could benefit from the additional functionality.

List of Features I like:

Monitor - Postman has a Monitors option that is great for continuous monitoring; you can link a monitor to your collection and run it on a schedule.  I like to take a small key set of APIs and run them every 5 minutes using Monitor to schedule my collection runs (from the Postman cloud).  This tells me whether the APIs are running and whether their performance is degrading.  The monitoring dashboards are fantastic, and alerting allows for webhooks or email.  In this post, I monitor APIs secured with OAuth and send alerts into Microsoft Teams using email on the Teams channels.

Postman API Builder - Allows me to build OpenAPI contracts and mock the API to support contract-first/API-first design (UI and backend development can be done independently).  I tend to use Swagger tooling and APIM mocking to do this, but I'm very tempted to use Postman Mock Servers.

Postman CLI - This allows me to run collections on my local machine or from a server.  In a post I cover using the Postman CLI to run a Postman collection using PowerShell, adding a shortcut to quickly verify an API is running; I also added an Elgato Stream Deck button so one click runs my collection on my laptop.
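As a minimal sketch (the file names and API key variable are placeholders), a collection run from PowerShell looks something like:

# Authenticate the Postman CLI, then run a collection with an environment file
postman login --with-api-key $env:POSTMAN_API_KEY
postman collection run ".\smoke-tests.postman_collection.json" `
    -e ".\uat.postman_environment.json"
# Non-zero exit code means a request or test assertion failed
if ($LASTEXITCODE -ne 0) { Write-Warning "API smoke tests failed" }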

More features: Environments, tests and collecting responses into variables, Collections, authentication reuse, Workspaces, loading test data from files, source control/Git, and pipeline testing integration.

Tuesday 14 March 2023

Power Platform Logging, Monitoring, and Alerting

This post relates to a previous strategy blog post; read that first: https://www.pbeck.co.uk/2023/02/setting-up-azure-application-insights.html

Overview:  Microsoft uses Azure Application Insights to natively monitor Power Apps via an instrumentation key set at the app level.  Logging for model-driven apps and Dataverse is a Power Platform configuration set at the environment level, e.g. UAT, Prod.

When setting up Application Insights, use the Log Analytics workspace-based approach and not the "Classic" option, as Classic is being deprecated.

Power Apps (Canvas Apps): Always add the instrumentation key to all canvas apps; it is set at the "App" level within each canvas app.  Deploying solutions brings challenges with changing keys for App Insights logging (unmanaged layers).

"Enable correlation tracing" feature imo. should always be turned on, it is still an experimental feature but with it off, the logging is based on a sessionid

"Pass errors to Azure Application Insights" is also an experimental feature.  Consider turning it on.

Canvas apps have "Monitor", model-driven apps also have this monitoring ability, and Power Automate has its own monitoring.

Log to App Insights (backed by an Azure Log Analytics workspace); a simple example passing a record into customDimensions:

Trace("My PB app ... TaxAPI.NinoSearch Error - Search - btnABC",
        TraceSeverity.Error, // use the appropriate tracing level
        {
            myappName: $"PB App: {gblTheme.AppName}",
            myappError: FirstError.Message,  // optional
            myappEnvironment: gblEnv,
            myappErrorCode: 10010,
            myappCorrelationId: GUID() // unique correlationId
        }
    );
Query the logs using Kusto:
traces
| extend errCode = tostring(customDimensions["myappErrorCode"]), err = tostring(customDimensions["myappError"])
| where errCode == "10010"
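If you want to run the same query outside the portal, Application Insights exposes a REST query endpoint.  A hedged PowerShell sketch, assuming an API key created under the resource's "API Access" blade (app id and key are placeholders):

# Minimal sketch: run the Kusto query over the Application Insights REST API
$appId  = "<application-insights-app-id>"
$apiKey = "<app-insights-api-key>"
$query  = 'traces | extend errCode = tostring(customDimensions["myappErrorCode"]) | where errCode == "10010"'
Invoke-RestMethod -Headers @{ "x-api-key" = $apiKey } `
  -Uri "https://api.applicationinsights.io/v1/apps/$appId/query?query=$([uri]::EscapeDataString($query))"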

Coming June 2023

Push cloud flow execution data into Application Insights | Microsoft Learn

This allows logging to tie flows back to the calling Canvas app.  You can do this manually today, but it has to be applied at every call into the flow, or after it.

Below is a basic checklist of decisions to ensure you have suitable logging.

Logging Checklist:

  1. Set up Azure Log Analytics (1 per DTAP env, e.g. uat, prd).
  2. Get the workspace key needed for logging to Log Analytics: "Agents" > "Log Analytics agent instructions", then copy the Workspace Id and the Secondary Key (this can also be scripted, see the sketch after this list).
  3. Create an Azure Application Insights instance per DTAP environment.
  4. Each Canvas app needs an instrumentation key (check you have aligned the DTAP log instances with the Canvas app DTAP environments).
  5. Power Automate has great monitoring, but it is a good idea to set up logging for Dataverse (which shall cover model-driven apps), done through Power Platform Admin Center > Environment.
  6. Enable the logging preview features for Canvas apps & check the Power Automate push cloud flow execution feature state.
  7. Do you have logging patterns in your Canvas apps for errors, do you add tracing, and is it applied consistently?
  8. Do you have a pattern for Power Automate runs from Canvas apps?  I like to log if the workflow errors after the call.
  9. Do you have a pattern for custom connectors?
  10. Do you correlation-trace custom APIs (internal and 3rd party)?
  11. Do you have a Try, Catch, Finally scope/pattern for workflows?  How do you write to the logs?  Most common is to use an Azure Function with the C# SDK; I like to use the Azure Log Analytics connector in my Catch scope to push error info into the workspace log using a custom table.
  12. Ensure all Azure services have instrumentation keys. Common examples are Azure Functions, Azure Service Bus, API Manager; the list goes on...
  13. Do you implement custom APIM monitoring configuration?
  14. Do you use the SDK in your code (Functions etc.)?
  15. Set up availability tests - super useful for 3rd party APIs.
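For step 2, the workspace id and keys can also be pulled from the command line rather than the portal.  A minimal sketch using the Azure CLI from PowerShell (resource names are placeholders):

# Fetch the Log Analytics Workspace Id and Secondary Key without the portal
az monitor log-analytics workspace show `
    --resource-group "rg-logging-uat" --workspace-name "log-uat" `
    --query "customerId" -o tsv          # the Workspace Id
az monitor log-analytics workspace get-shared-keys `
    --resource-group "rg-logging-uat" --workspace-name "log-uat" `
    --query "secondarySharedKey" -o tsv  # the Secondary Key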

Once you have the logs captured and traceable (Monitor & Alerting Checklist):

  1. Create queries to help find data.
  2. Create monitoring dashboards using the data.
  3. Use OOTB monitoring for Azure and the platform.
  4. Consider linking/embedding other monitors, i.e. Power Automate, DevOps, Postman Monitor.
  5. Set up alerting within the Azure Log workspace using action groups; don't over-email.  For informational alerts, send to Slack or Teams (it is very simple to set up a webhook or incoming email on a channel to monitor - see the sketch after this list).
  6. Power Automate has connectors for adaptive card channel messaging; consider using them directly in flows or from alerts - push the data into a flow that logs the alert as an adaptive card right into the monitoring channel.
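For point 5, posting into a Teams channel from a script or alert webhook is a one-liner once the channel has an incoming webhook.  A minimal PowerShell sketch (the webhook URL is a placeholder you create on the channel):

# Push an informational alert into a Teams channel via an incoming webhook
$webhookUrl = "https://contoso.webhook.office.com/webhookb2/<guid>/IncomingWebhook/<id>"
$payload = @{ text = "UAT: TaxAPI availability test failed at $(Get-Date -Format u)" } | ConvertTo-Json
Invoke-RestMethod -Method Post -Uri $webhookUrl -ContentType "application/json" -Body $payload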

Saturday 4 March 2023

How to check if any existing Model Apps use one of the MS deprecated Controls

Problem: Microsoft has listed 6 model-driven app controls that shall no longer work from April 2024.  So how do I check whether any of my existing model-driven apps use the offending controls?

"Effective January 2023, the following controls for model-driven apps are deprecated: auto-complete, input mask, multimedia player, number input, option set, and star rating.

Why is this needed?

We will be introducing new Fluent UI controls that have better usability, accessibility, and dark mode support.

Impact

  • Starting April 2023, these controls can no longer be added to forms.
  • Existing control instances will work on existing forms until April 2024.

Action required by you

Evaluate existing forms that include a deprecated control and replace them with a newer control."  Source

Hypothesis:  I can go into the forms, but I have a lot of forms on multiple tables that have been customised.  I'd like to speed up the checking process, so I'm going to add all model-driven apps to a solution (the default solution will get too big), and from there I'll extract the unmanaged solution files and merely run a search for the six offending control names.

Proposed Solution:

https://youtu.be/F4LLF-y6RUo

1. Unpack your unmanaged solution using the Power Platform CLI (pac CLI) onto your filesystem

2. Go to the folder containing all the files and do a search for the offending controls (a scripted sketch follows the list of names):

MscrmControls.NumberInput.NumberInputControl

MscrmControls.OptionSet.OptionSetControl

MscrmControls.Rating.StarRatingControl
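Steps 1 and 2 can be scripted in one go.  A minimal PowerShell sketch (the solution file names are placeholders):

# Unpack the exported solution zip with the Power Platform CLI, then search
# the unpacked files for the deprecated control names
pac solution unpack --zipfile ".\MySolution.zip" --folder ".\MySolution-src"
Get-ChildItem ".\MySolution-src" -Recurse -File |
    Select-String -Pattern "MscrmControls.NumberInput.NumberInputControl",
                           "MscrmControls.OptionSet.OptionSetControl",
                           "MscrmControls.Rating.StarRatingControl" |
    Select-Object Path, LineNumber, Line   # each hit is a form to remediate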

Thought: The other 3 controls can no longer be added, so as of March 2023 I don't know their names.  I played with "mediaplayer" and the others, and it looks like I don't have them in any of my solutions.

More Info:

It was way easier for me for a client's whole platform, as the DevOps pipelines extract the unmanaged solutions into GitHub as part of each solution deployment.  I merely cloned the latest code base that included all Power Platform solutions, and performed the searches to identify issues.  Modern model-driven apps hardly had any instances of the offending controls, but the older stuff needs more remediation work.

Performing the Windows file search clearly shows if you have the problem:


i.e.

<Required schemaName="MscrmControls.NumberInput.NumberInputControl"..

<Required schemaName="MscrmControls.OptionSet.OptionSetControl"..

Monday 27 February 2023

Emergency fixes into a controlled production Power Platform environment

Power Platform makes it easy to push changes through environments using managed solutions.  This is a simple way to allow development to continue, yet deploy an emergency fix quickly and then get it into the "main branch".

Deploying a hot fix into production within a DTAP regulated environment

If you build the Power Apps environment dynamically, then hot fixes are easy.

There are still open questions to be answered around the duration backups are held for, and around taking backups and restoring them to new sandbox environments.

The Power Platform ALM Microsoft Knowledge base is very good.

Friday 24 February 2023

Environment variables for Power Automate

Overview: Environment variables are great in Power Platform, but when using managed solutions and ALM there are a couple of points worth knowing that will make your deployment and support easier.

Background:  Power Platform has environment variables (env vars) that are stored in two entities within Dataverse, namely: definitions and values.

We deploy env vars through solutions and can easily amend them by adding an unmanaged layer.

Problem:  In your managed environment you end up with a tangle of env vars that makes upgrading solutions fail.  In a nutshell, deleting unmanaged layers using the UI only clears the value part of the env var, not the definition part.  The unmanaged-layer env var is made up of two parts, stored in the two environment variable entities in Dataverse; both must be removed.

It makes sense: in UAT we have env vars initially set up in a solution, then we have unmanaged layers added when we amend the values, and later we deploy the latest env var from a different solution.

What I have seen is env vars being deployed as part of a one-solution-for-all-artefacts approach; as the project grows, more solutions are added for packaging, and each of these solutions has a few more env vars.  Eventually, as you use the env vars across multiple Canvas apps and Power Automate instances, you build a dedicated solution for env vars.

Best Practice:  Well, what I recommend at least:

  1. Put all variables into a single solution from the start; it means it is easy to deploy them quickly across all your DTAP environments, e.g. uat, prod.
  2. In the unmanaged solution, ensure the env variables do not have a "current value".  Set the "default value" in Dev.
  3. Use settings files to fill the "current value" in each DTAP env; keep the current values for each env in a single settings file that the pipeline pushes, i.e. setting-uat.json, setting-prd.json (see the sketch after this list).
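For step 3, the Power Platform CLI can generate the settings file skeleton from the solution zip.  A minimal sketch (file names are placeholders):

# Generate a deployment settings file skeleton from the solution zip,
# then keep one edited copy per DTAP environment
pac solution create-settings --solution-zip ".\EnvVars.zip" --settings-file ".\setting-uat.json"
# Edit setting-uat.json so each EnvironmentVariables entry carries the UAT
# value, then have the pipeline supply the file when importing the solution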
Tip: If you need to change any value, merely rerun the solution containing the env vars; don't ever use an unmanaged layer to change env vars.

Tip: It's better to build your Power Platform environment in DevOps pipelines, but if you use existing environments and merely push solutions on top (much more common), then clean up your existing vars as outlined below.
  1. Delete both the definition and value parts (manually) until the environment (UAT, PRD) has no custom env vars - use the Dataverse Web API (a sketch follows this list).
  2. Run the single env var solution.
  3. Never add unmanaged layers to env variables; if you need a change, change the solution package and redeploy - it should take minutes to do.
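For step 1, the definitions and values can be listed over the Dataverse Web API before deleting them record by record.  A hedged PowerShell sketch; the org URL is a placeholder and $token is a bearer token you have already acquired for the environment (e.g. via az account get-access-token):

# List environment variable definitions and current values in Dataverse;
# delete the value rows first, then the definitions, via DELETE per record id
$orgUrl  = "https://yourorg.crm.dynamics.com"
$headers = @{ Authorization = "Bearer $token"; Accept = "application/json" }
Invoke-RestMethod -Headers $headers -Uri "$orgUrl/api/data/v9.2/environmentvariabledefinitions?`$select=schemaname,defaultvalue"
Invoke-RestMethod -Headers $headers -Uri "$orgUrl/api/data/v9.2/environmentvariablevalues?`$select=value"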

Sunday 19 February 2023

Setting up Azure Application Insights for Monitoring Power Platform Canvas Apps

Overview: We are building key applications in Power Apps.  It is essential that the appropriate level of monitoring, alerting, and tracing is set up.  The diagram below provides an overview of likely solutions.

The top half of the diagram calls out the client components used in the application; you need to add instrumentation keys to ensure the logging goes to the correct DTAP environment instance, i.e., production components used in the solution must point to the production instance of Azure Application Insights.  The diagram only deals with production; I prefer to point the lower environments to a non-prod instance.

The bottom half discusses decisions that are key to making the monitoring and alerting successful.  It should aim to ensure:
  1. Errors are detected,
  2. Activity can be traced,
  3. The system is healthy,
  4. Components that are down are identified,
  5. Performance changes (stable or degrading) are visible,
  6. Warnings arrive before the system goes down,
  7. Alerting is set up (don't over-alert, and ensure it is focused on the right people), and
  8. Deployments are validated.
I turn on the experimental features:

A gotcha with App Insights in Canvas apps applies to managed solutions: if you add an App Insights instrumentation key, or leave it blank, there is no easy way to override the value.  You can add an unmanaged layer, but the issue is that on the next deployment the app only updates to the new version once the unmanaged layer is removed, and you will then need to manually re-add the unmanaged layer after each deployment with the appropriate App Insights instrumentation key.  There are workarounds involving extracting the solution, amending the setting, and repackaging using the Power Platform CLI, but they have issues.
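For reference, the unpack/amend/repack workaround looks roughly like this with the Power Platform CLI (paths are placeholders, and as noted it has issues):

# Unpack the managed solution, amend the instrumentation key in the unpacked
# canvas app sources, then repack (paths and solution names are placeholders)
pac solution unpack --zipfile ".\MyApp_managed.zip" --folder ".\src" --packagetype Managed
# ... edit the instrumentation key setting inside .\src, then:
pac solution pack --zipfile ".\MyApp_managed_fixed.zip" --folder ".\src" --packagetype Managed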


Other thoughts:
It is a good idea to use the App Insights SDKs to trace key info within each service.
Power Automate should use the Try/Catch pattern for logging.  I log to Azure Log Analytics using the built-in Azure Log Analytics Data Collector connector.