Wednesday 26 April 2023

Uploading files to a Dataverse table using the "Add picture" control in a Canvas App

Overview: I need to be able to upload pictures, or any file for that matter, and persist the file in a Dataverse entity.  This took me a little longer than it should have.

Notes: I added an "Add picture" control to the screen; I also added a label and an icon to clear the uploaded image.

I also added a save button; here I persist to a Dataverse table named "Evidence".  The Evidence table has a column called "File" of type "File".  The part that took me a while in the Patch statement was getting the file into the correct format (it is a record, not a picture).  This works for any file upload, but has the drawback of needing to change the column's allowed file types to "All file types".
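
A minimal Power Fx sketch of the Patch call. The control name AddMediaButton1 and the Name column are my assumptions; the key point is that a Dataverse file column takes a record with FileName and Value, not a bare image:

Patch(
    Evidence,
    Defaults(Evidence),
    {
        Name: "Evidence " & Text(Now()),  // plus any other required columns
        File:
        {
            FileName: "evidence.jpg",     // name stored against the file
            Value: AddMediaButton1.Media  // the uploaded file from the "Add picture" control
        }
    }
)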

Saturday 22 April 2023

Microsoft's Well-Architected Framework

Overview: The goal is to make your use of Azure, and your IT function, operate as optimally as possible: performance, scalability, minimised cost, reliability, optimised DevOps, failure testing, geo/data sovereignty, application security, and authentication security.  It is to be done consciously: 1) Collect/Gather (Well-Architected Review) > 2) Analyse > 3) Advise (build a plan) > 4) Implement

Five Pillars:

  1. Security;
  2. Performance Efficiency;
  3. Reliability;
  4. Cost Optimisation;
  5. Operational Excellence;

Many tools are part of the Well-Architected Framework:

  • Azure Advisor - analyses your workloads and gives possible recommendations/improvements; these can be imported into the Azure Well-Architected Review Tool.
  • Azure Well-Architected Review Tool (amazing) - answer questions and input the Azure Advisor recommendations. Pick an area and one or more of the five pillars, then work through the tool to get a milestone, and look to implement the recommendations iteratively.

  • Well-Architected checklist
  • Provides templates to complete actions, e.g., RPO/RTO, security threat analysis, and threat modelling (Microsoft uses STRIDE; DREAD is a similar model)

1. Security Pillar (protect against, detect, and respond to threats)

Tools: monitoring from Azure Security Centre (ASC) NB!; Azure Defender and Microsoft 365 Defender feed Azure Sentinel (the SIEM), and you can also stream on-premises logs into the SIEM.

2. Performance Efficiency Pillar
Trade off cost against reliability, scale, and performance.  Chaos testing - test breaking/removing resources to mimic problems.  Monitor resources for resilience and performance.  Decide whether you want to dynamically scale, react to performance, or increase capacity ahead of load.  Cache (e.g., Redis), and in as many layers as possible.  Use multiple regions/zones to keep services close to users (paired regions are a good idea); zone and region resilience can affect performance.  A health model in effect means having monitors and alerts that verify your system's health; think Azure Dashboards or Grafana.

3. Reliability Pillar
High Availability (HA) & Resilience of Azure Resources

4. Cost Optimisation Pillar
  • Understand cost (choose the right service, e.g., Cosmos DB can be cheaper than SQL or vice versa)
  • Optimise (remove orphaned resources; reservations vs PAYG; licence optimisation; scale consumption when needed/optimise instances; be pragmatic in cost-to-benefit trade-offs).  Use cost modelling to understand what the cost is likely to be going forward.  Good RPO/RTO and multi-geo is expensive, but pay for it if you need it.  Design choices affect the cost.  Optimise data transfers and auto-scaling (vertical and horizontal, both expansion and reduction of resources).  Use the Azure Cost Management tool.  Automating provisioning helps with cost, as the correct resource provisioning is implemented (note: ARM templates and Bicep are both stateless, unlike Terraform, which tracks state).
  • Control costs going forward (review periodically/constantly; use alerts to monitor usage).  Monitor your resource usage: can it be reduced?

5. Operational Excellence Pillar

Dr. Kai Dupé presented the Well-Architected Framework on behalf of Microsoft on 20 & 21 April 2023; I took notes there to build this post.

Monday 10 April 2023

Postman automation reminders

Also see "Postman to check Open API's are Running"

Fire Postman collections on demand using curl

A monitor is already set up: I need the Postman monitor ID and an API key.
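
A minimal sketch using the Postman API; the monitor ID and API key placeholders are mine, take both from the Postman web app:

curl --request POST "https://api.getpostman.com/monitors/<monitor-id>/run" --header "X-Api-Key: <postman-api-key>"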

Run a local Postman collection using Newman via PowerShell (call it from CI pipelines or a shortcut on the desktop).
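
A rough PowerShell sketch, assuming Node.js is installed and the collection/environment are exported to local files (the paths are illustrative):

npm install -g newman
newman run "C:\postman\MyApis.postman_collection.json" -e "C:\postman\uat.postman_environment.json" --reporters cli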




Thursday 6 April 2023

Runas on Flows

Overview: If I use a connection in a Canvas App, the connector runs with the signed-in user's own permissions, i.e., as the signed-in user.

Problem: I wish to run a flow as a specific user and not as the user calling the flow from the Canvas app.

Hypothesis: I wish to call the logging connector to write into Log Analytics, so I have created a flow.  If I use the Power Apps V2 trigger, it offers an option to run in another user's context.

Resolution: Open the Workflow, ensure you are using the Power Apps V2 trigger, then...


Here I use Scopes to implement a Try/Catch/Finally set of logic.
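
Under the hood this is just three Scope actions where the "run after" setting on Catch is switched to the failure outcomes; a rough fragment of the flow definition (the scope names are mine):

"Try":     { "type": "Scope", "actions": { } },
"Catch":   { "type": "Scope", "runAfter": { "Try": [ "Failed", "TimedOut" ] }, "actions": { } },
"Finally": { "type": "Scope", "runAfter": { "Catch": [ "Succeeded", "Failed", "Skipped", "TimedOut" ] }, "actions": { } }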


Tip: most people tend to use a custom connector to push the error message from the workflow into an Azure Function; the function app uses the App Insights SDK and logs the workflow error.

Simple C# code (an Azure Functions run.csx) that writes to App Insights:

#r "Newtonsoft.Json"
using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;
public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
{
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
dynamic data = JsonConvert.DeserializeObject(requestBody);
int eventId = data?.eventId;
string errType = data?.errType;
string errMsg = data?.errMsg;
string correlationId = data?.correlationId;
string workflowId = data?.workflowId;
string workflowUrl = data?.workflowUrl;
string flowDisplayName = data?.flowDisplayName;
var custProps = new Dictionary<string, object>()
{
{ "CorrelationId", correlationId},
{ "WorkflowId", workflowId},
{ "WorkflowUrl", workflowUrl},
{ "WorkflowDisplayName", flowDisplayName}
};
using (log.BeginScope(custProps))
{
if (errType=="Debug") { log.Log(LogLevel.Debug, eventId, $"{errMsg}"); }
else if (errType=="Trace") { log.Log(LogLevel.Trace, eventId, $"{errMsg}"); }
else { log.LogInformation($"Event is {eventId}, type is {errType}, and msg is {errMsg}");}
};
string responseMessage = $"This HTTP triggered function executed successfully. {errType} - {errMsg}";
return new OkObjectResult(responseMessage);
}
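
The function expects a JSON body with the fields read above; an illustrative example (all values are mine):

{
  "eventId": 10010,
  "errType": "Debug",
  "errMsg": "TaxAPI.NinoSearch failed",
  "correlationId": "0f8fad5b-d9cb-469f-a165-70867728950e",
  "workflowId": "<workflow-id>",
  "workflowUrl": "<workflow-run-url>",
  "flowDisplayName": "Log-Error-Flow"
}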

More Info:

Reza Dorrani has a great recording showing how to run Power Automate flows using elevated/shared accounts.

Saturday 18 March 2023

Canvas Apps Workflow logging linkage

Overview:  Power Automate has good monitoring and analysis within the product, and Canvas apps can be instrumented with App Insights, which allows for custom tracing.  The issue is linking the Canvas app logs to a called workflow.  In this video (2 min), I discuss linking Traces in Azure App Insights with flows run on Power Automate.

By passing the workflow run's URL back to the calling canvas app, a deep link will allow support to follow a user's activity, including drilling into the flows being called.
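
Inside the flow, the run's deep link can be built from the workflow() metadata and returned via a "Respond to a PowerApp" output; a sketch using the commonly used URL shape (treat the exact paths as an assumption to verify):

concat('https://make.powerautomate.com/environments/',
       workflow()?['tags']?['environmentName'],
       '/flows/', workflow()?['name'],
       '/runs/', workflow()?['run']?['name'])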



Add additional logging from within your flows to a custom table in Azure Log Analytics:
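
For this I use the "Azure Log Analytics Data Collector" connector's "Send Data" action in the Catch scope; an illustrative JSON body (the field names are mine, and the expressions are a sketch to adapt - Log Analytics appends _CL to the custom log name you give the action):

{
  "CorrelationId": "@{workflow()?['run']?['name']}",
  "FlowDisplayName": "@{workflow()?['tags']?['flowDisplayName']}",
  "ErrorMessage": "@{result('Try')[0]?['error']?['message']}"
}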


Friday 17 March 2023

Why is Postman so fantastic?

Overview: lots of IT technical people use Postman for API creation, exploration, and testing.  There is so much more to the product than most developers are aware of.  Initially Postman was for developers to explore and test APIs - basically a test rig for APIs.  Postman built a Minimum Lovable Product (MLP) initially, and the multiple features added over the years are so useful.  Most users tend to stick to the basic features but could benefit from the additional functionality.

List of Features I like:

Monitor - Postman has a Monitors option that is great for continuous monitoring: you can link your collections and run them on a schedule. I like to take a small, key set of APIs and run them every 5 minutes using Monitor to schedule my collection runs (from the Postman cloud); this tells me whether the APIs are running and whether their performance is degrading.  The monitoring dashboards are fantastic, and alerting allows for webhooks or email.  In this post, I monitor APIs with OAuth security and send alerts into Microsoft Teams using email on the Teams channels.

Postman API Builder - allows me to build OpenAPI contracts and mock the API to allow contract-first/API-first design (UI and backend development can be done independently).  I tend to use Swagger tooling and APIM mocking to do this, but I'm very tempted to use Postman Mock Servers.

Postman CLI - allows me to run collections on my local machine or from a server.  In a post I cover using the Postman CLI to run a Postman collection using PowerShell, adding a shortcut to quickly verify an API is running; I also added an Elgato Stream Deck button so I can click once and run my collection on my laptop.

More Features: Environments, Tests and collecting responses into variables, Collections, Authentication Reuse, Workspaces, Loading test file data, source control/Git, and Pipeline testing Integration.

Tuesday 14 March 2023

Power Platform Logging, Monitoring, and Alerting

This post relates to a previous strategy blog post; read that first: https://www.pbeck.co.uk/2023/02/setting-up-azure-application-insights.html

Overview:  Microsoft uses Azure Application Insights to natively monitor Power Apps via an instrumentation key at the app level.  Logging for model-driven apps and Dataverse is a Power Platform configuration at the environment level, e.g., UAT, Prod.

When setting up Application Insights, use the Log Analytics workspace approach and not the "Classic" option, as Classic is being deprecated.

Power Apps (Canvas Apps): always add the instrumentation key to all canvas apps; it is set at the "App" level within each canvas app.  Deploying solutions brings challenges with changing the App Insights key per environment (unmanaged layers).

"Enable correlation tracing" feature imo. should always be turned on, it is still an experimental feature but with it off, the logging is based on a sessionid

"Pass errors to Azure Application Insights" is also an experimental feature.  Consider turning it on.

Canvas Apps have "Monitor", Model driven apps also have this ability to monitor, and Power automate has it's own monitoring

Log to App Insights (backed by an Azure Log Analytics workspace); a simple example with a customDimensions record:

Trace("My PB app ... TaxAPI.NinoSearch Error - Search - btnABC",
        TraceSeverity.Error, // use the appropriate tracing level
        {
            myappName: $"PB App: {gblTheme.AppName}",
            myappError: FirstError.Message,  // optional
            myappEnvironment: gblEnv,
            myappErrorCode: 10010,
            myappCorrelationId: GUID() // unique correlationId
        }
    );
Query the logs using Kusto:
traces
| extend errCode = tostring(customDimensions["myappErrorCode"]), err = tostring(customDimensions["myappError"])
| where errCode == "10010"

Coming June 2023

Push cloud flow execution data into Application Insights | Microsoft Learn

This allows logging to tie flows back to the calling Canvas app.  You can do this manually today, but it has to be applied at every call to the flow (or after it).

Below is a basic checklist of decisions to ensure you have suitable logging.

Logging Checklist:

  1. Set up Azure Log Analytics (1 per DTAP environment, e.g., uat, prd)
  2. Get the workspace key needed for logging to Log Analytics: "Agents" > "Log Analytics agent instructions", then copy the Workspace Id and the Secondary Key
  3. Create an Azure Application Insights instance per DTAP environment
  4. Each Canvas app needs an instrumentation key (check: have you aligned the DTAP log instances with the Canvas App DTAP?)
  5. Power Automate has great monitoring, but it is a good idea to set up logging for Dataverse (which also covers model-driven apps), done through the Power Platform Admin Center > Environment
  6. Enable the logging preview features for Canvas apps & check the Power Automate "push cloud flow execution data" feature state
  7. Do you have logging patterns in your Canvas app for errors, do you add tracing, and is it applied consistently?
  8. Do you have a pattern for Power Automate runs from Canvas apps?  I like to log if the workflow errors after the call.
  9. Do you have a pattern for Custom Connectors?
  10. Do you correlation-trace custom APIs (internal and 3rd party)?
  11. Do you have a Try/Catch/Finally scope pattern for workflows?  How do you write to the logs?  Most common is to use an Azure Function with the C# SDK; I like to use the Azure Log Analytics Connector in my Catch scope to push error info into the workspace log using a custom table (see the Kusto sketch after this list).
  12. Ensure all Azure services have instrumentation keys.  Common examples are Azure Functions, Azure Service Bus, API Manager; the list goes on...
  13. Do you implement custom APIM monitoring configuration?
  14. Do you use the SDK in your code (Functions etc.)?
  15. Set up availability tests - super useful for 3rd-party APIs.
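
A Kusto sketch for querying such a custom table; the table and column names are hypothetical (Log Analytics appends _CL to custom tables and type suffixes such as _s and _g to columns):

FlowErrors_CL
| where TimeGenerated > ago(24h)
| project TimeGenerated, CorrelationId_g, FlowDisplayName_s, ErrorMessage_s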

Once you have the logs captured and traceable (Monitor & Alerting Checklist):

  1. Create Queries to help find data
  2. Create monitoring dashboard using the data
  3. Use OOTB monitoring for Azure and the platform
  4. Consider linking/embedding other monitors, e.g., Power Automate, DevOps, Postman Monitor
  5. Set up alerting within the Azure Log Workspace using action groups; don't over-email.  For informational alerts, send to Slack or Teams (it is very simple to set up a webhook or an incoming email address on a channel to monitor)
  6. Power Automate has connectors for posting adaptive cards to channels; consider using them directly in flows or from alerts - push the data into a flow that logs the alert as an adaptive card right into the monitoring channel.