Showing posts with label logging. Show all posts

Sunday, 13 April 2025

Mendix - Logging & Tracing

Mendix offers integrations with various Application Performance Monitoring (APM) tools.

There is no integration with Azure Monitor.  The closest option I have found is that the log files can be downloaded.

The Cloud edition allows you to download the log files.
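
As a sketch of that download route: Mendix Cloud exposes environment logs over its Deploy API. The exact URL shape and header names below are my assumptions for illustration and should be checked against the current Mendix Deploy API docs before use.

```python
# Hypothetical sketch: building a request to download one day's log file from
# Mendix Cloud via the Deploy API. URL shape and header names are assumptions.
def build_log_request(app_id: str, mode: str, date: str,
                      user: str, api_key: str) -> tuple:
    """Return the (url, headers) pair for a log-file download request."""
    url = (f"https://deploy.mendix.com/api/1/apps/{app_id}"
           f"/environments/{mode}/logs/{date}")
    headers = {"Mendix-Username": user, "Mendix-ApiKey": api_key}
    return url, headers

url, headers = build_log_request("myapp", "acceptance", "2025-04-13",
                                 "ops@example.com", "secret")
```

From there, a scheduled job could pull each day's file and forward it to Azure Monitor's ingestion side, which would partly close the missing-integration gap.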

TBC

Wednesday, 29 May 2024

Mendix tips & thoughts

Mendix Charting/dashboarding options:

  • E-Charts (community supported) are simple and nice,
  • Anycharts (very common).

Grafana can be integrated, but I haven't tried it with Mendix.

Power BI can show reports using an iFrame widget.  I like this approach.

Module/Widget Support Note: Check whether libraries are supported by Mendix/Platform or Partner.

Community-supported modules can obviously be changed by the community. Partner-supported is also an option if you have an agreement with, or trust, the partner.

Performance Testing Tool options:

A colleague has used JMeter and feels it was not ideal. 

I'm considering using the Microsoft Playwright Testing service and Playwright testing.

Enterprise Logging/SIEM SaaS integrations supported by Mendix:

  1. AppDynamics,
  2. Datadog,
  3. New Relic,
  4. Dynatrace, and
  5. Splunk.

Watch out for:

Logging
Each system logs to log files on the local machine; these can be pushed into the local Postgres instance.  This can add a massive storage load for auditing and logging.

Monitoring
Logs can be shipped out using backup and restore or by calling the REST Open API.

Maintenance
Mendix builds a database per app per environment, so the recommendation is at least three per app, as you need Dev, Test, and Prod. Each instance uses PostgreSQL by default (you can only use PostgreSQL if you use the Mendix-provided images deployed on AWS).

Global Variables
Mendix doesn't have a concept of a global variable at startup or per session. You can load lookup data, which is often held centrally in your enterprise. This can get heavy quickly, but you can copy Open API results into the local PostgreSQL database so the app only reads local data.
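
A minimal sketch of that copy-locally idea: fetch lookup data once, cache it, and serve later reads locally. The fetcher here is a hypothetical stand-in for the real Open API call, and the in-memory dict stands in for the local PostgreSQL copy.

```python
# Sketch: cache enterprise lookup data locally so repeated reads stay local.
# fetch_remote is a hypothetical stand-in for the real Open API call.
class LookupCache:
    def __init__(self, fetch_remote):
        self._fetch = fetch_remote
        self._store = {}          # stands in for the local PostgreSQL copy

    def get(self, key):
        if key not in self._store:            # only hit the remote API once per key
            self._store[key] = self._fetch(key)
        return self._store[key]

calls = []
def fetch_remote(key):
    calls.append(key)                         # record remote round-trips
    return {"code": key, "label": key.title()}

cache = LookupCache(fetch_remote)
cache.get("country")
cache.get("country")                          # served locally, no second remote call
```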

Costs
With Mendix, cost can escalate rather quickly. Reduce cost by scaling back to the smallest instance in Dev and Test, especially post-go-live. Each app has a separate database, so management and connectivity can become hard to control.

Performance
Watch the number of controls and the size of each application.



Monday, 28 August 2023

App Insights for Power Platform - Part 10 - Custom Connector Logging Thoughts

Overview: One of our developers asked about a log he was struggling to trace, and it took me a while and a lot of help from the community to truly understand the issue.  My scenario is shown below:


Scenario: I have a Dataverse change triggering a flow; the flow calls a Custom Connector, which in turn calls an Azure Function (that I control).  The flow fails, and I have used a pattern in the flow to catch the error and log it into Log Analytics.  All good, but then I don't see the event where the action calls the Function, even though my Function has logging enabled.  I can see I am getting a 401 Unauthorized error.

Initial Hypothesis: The Power Platform uses APIM internally to implement Custom Connectors, and clients/tenants have no access to the internal logging/traffic.  Microsoft has provided the ability to use ILogger in the custom connector code to log the traffic.

We have flows that intermittently get a 401; when the flow is manually rerun, it works and I can see the traffic coming into the Function.

The failure rate is extremely low, retries almost always fix the issue, and a third try always ensures the transaction goes through.
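
Since a second or third try almost always succeeds, a retry-with-backoff wrapper in the caller masks the transient 401s. This is a generic sketch of that idea, not the flow's actual retry policy; the callable and delays are illustrative.

```python
import time

# Sketch: retry a call that intermittently fails with a transient 401.
def call_with_retry(call, attempts=3, base_delay=1.0, sleep=time.sleep):
    for attempt in range(attempts):
        status = call()
        if status != 401:                        # anything but the transient 401: stop
            return status
        if attempt < attempts - 1:
            sleep(base_delay * 2 ** attempt)     # back off: 1s, 2s, ... between tries
    return status                                # still failing after all attempts

# Simulate two transient 401s followed by success (sleep stubbed out for speed)
responses = iter([401, 401, 200])
result = call_with_retry(lambda: next(responses), sleep=lambda s: None)
```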

Resolution:  Add logging to the custom connector so we can speak with MS support about the issue.  Add alerting to notify support, who can contact the user or choose to rerun the flow.

Alternative: If I enable code on the connector, I can override the behaviour and inject C# code to work with the backend, or handle logic such as replacing text, etc.

1. In step "4. Code" tab of the custom connector, add the code below:

You can do any C# logic here; I'm sending the original request through, and if it doesn't return a 200, I'm logging it as critical.

2. Update the connector, go to the next step "5. Test" > "Update Connector" (Tip: follow the steps)

3. Run the "Test operation", open the Response and validate the response body is correct, then open the "Code logs" tab.  If it is blank, re-run "Update Connector" (irritating but true).

A 304 returned from the API is a cached response and not a problem, but 400 or 500 would be an issue; you could also look out for 429s.
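
That rule of thumb can be written down as a small classifier. The categories ("ok", "watch", "issue") are my own labels for illustration, not anything built into the connector:

```python
# Sketch: triage HTTP status codes the way described above.
def triage(status: int) -> str:
    if status == 304:
        return "ok"            # cached response, not a problem
    if status == 429:
        return "watch"         # throttling, worth keeping an eye on
    if 400 <= status < 600:
        return "issue"         # client or server errors need attention
    return "ok"                # 2xx/3xx success and redirects
```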

Full C# Code:

public class Script : ScriptBase
{
    public override async Task<HttpResponseMessage> ExecuteAsync()
    {
        // Forward the original request to the backend
        this.Context.Request.Method = HttpMethod.Get;

        HttpResponseMessage response = await this.Context.SendAsync(this.Context.Request, this.CancellationToken).ConfigureAwait(continueOnCapturedContext: false);

        this.Context.Logger.LogTrace("Custom Connector ListBooks called");

        if (response.StatusCode == HttpStatusCode.OK)
        {
            this.Context.Logger.LogTrace("Success");
        }
        else // without this else, the critical entry was logged on every call
        {
            this.Context.Logger.LogCritical("Critical | " + response);
        }

        return response;
    }
}

More Info:

https://learn.microsoft.com/en-us/connectors/custom-connectors/write-code (NB)

https://never-stop-learning.de/logging-in-custom-connector-code/ (NB) The second part of this post, on the Alternative, is a rehash of this amazing post. I amended the logic, and now I'm wondering if I could write to App Insights using the SDK.

Series

App Insights for Power Platform - Part 1 - Series Overview 

App Insights for Power Platform - Part 2 - App Insights and Azure Log Analytics 

App Insights for Power Platform - Part 3 - Canvas App Logging (Instrumentation key)

App Insights for Power Platform - Part 4 - Model App Logging

App Insights for Power Platform - Part 5 - Logging for APIM 

App Insights for Power Platform - Part 6 - Power Automate Logging

App Insights for Power Platform - Part 7 - Monitoring Azure Dashboards 

App Insights for Power Platform - Part 8 - Verify logging is going to the correct Log analytics

App Insights for Power Platform - Part 9 - Power Automate Licencing

App Insights for Power Platform - Part 10 - Custom Connector enable logging (this post)

App Insights for Power Platform - Part 11 - Custom Connector Behaviour from Canvas Apps Concern


Friday, 11 August 2023

App Insights for Power Platform - Part 3 - Canvas App Logging (Instrumentation key)

App Insights for Power Platform - Part 1 - Series Overview 

App Insights for Power Platform - Part 2 - App Insights and Azure Log Analytics 

App Insights for Power Platform - Part 3 - Canvas App Logging (Instrumentation key) (this post)

App Insights for Power Platform - Part 4 - Model App Logging

App Insights for Power Platform - Part 5 - Logging for APIM 

App Insights for Power Platform - Part 6 - Power Automate Logging

App Insights for Power Platform - Part 7 - Monitoring Azure Dashboards 

App Insights for Power Platform - Part 8 - Verify logging is going to the correct Log analytics

App Insights for Power Platform - Part 9 - Power Automate Licencing

App Insights for Power Platform - Part 10 - Custom Connector enable logging

App Insights for Power Platform - Part 11 - Custom Connector Behaviour from Canvas Apps Concern

Overview: Logging & monitoring for Canvas apps is done in two parts: App Insights and the Canvas app Monitor.  This post focuses on logging via App Insights.


Note: Once a managed solution contains an instrumentation key, the logging key cannot be altered unless you give the environment unmanaged layers.  You can use the Power Apps CLI to compose a new managed solution for each DTAP environment, but that means a new build for each environment.

Example:

The annotated diagram below includes a log snippet.

1. The Canvas App has an instrumentation key; the log captures the front-end action,

2. Calls to Dataverse & Power Automate flows are logged (relies on step 1),

3. The Custom Connector calls an Azure Function (the Function logs to Log Analytics or App Insights),

4. The Function sends a request to APIM (APIM logging is set up on the endpoints), and

5. APIM sends an outbound API request and captures the response (relies on step 4).

Note: in this example I have Correlation tracking enabled on the Canvas App to get the full timeline, as shown below; it has been an experimental feature for a few years now.


When I turn off Correlation tracking, it is not as easy to trace items from start to finish.  All I get by default is the step 3 & 4 data in my transaction search timeline.

All 5 pieces are still captured but the timeline has to be pieced together for tracing.


I would also enable the preview logging feature, as well as the experimental one, if the client's governance allows experimental features to be turned on.

Example: add a Trace event with a custom dimension into your logs in a Canvas app:
Trace("Practice | Dimension " & txtMarker.Text, TraceSeverity.Information, {appCode:"Prac-01",appInfo:"Custom Dimensions button clicked"})


Summary: Always add as many logging features as possible in Canvas Apps, think about where your logs go, and also set up logging on the Azure services so transactions can be traced.

Sunday, 25 June 2023

App Insights for Power Platform - Part 8 - Verify logging is going to the correct Log analytics

Overview: If you are using ALM/DTAP environments, you want to ensure all environments are logging to the correct Log Analytics/App Insights.  This needs to cover all services, such as Canvas Apps, Functions, Service Bus, etc.

Canvas Apps: Open the Canvas app in Edit mode, Select "App", and check the instrumentation key points at the correct App Insights instance.

1. Ensure you have set up an instrumentation key in each Canvas App you build.



2. Ensure you turn on the logging feature (provides more logging)


3. Write custom logs (here I'm doing it using the Trace function in a Canvas Power App):

Trace("Practice | Info (1) | " & txtMarker.Text,TraceSeverity.Information)

4. Ensure you are in run mode (playing from edit mode does not log)

5. Open App Insights or Log Analytics, and check the trace is coming in



Series

App Insights for Power Platform - Part 1 - Series Overview 

App Insights for Power Platform - Part 2 - App Insights and Azure Log Analytics 

App Insights for Power Platform - Part 3 - Canvas App Logging (Instrumentation key)

App Insights for Power Platform - Part 4 - Model App Logging

App Insights for Power Platform - Part 5 - Logging for APIM 

App Insights for Power Platform - Part 6 - Power Automate Logging

App Insights for Power Platform - Part 7 - Monitoring Azure Dashboards 

App Insights for Power Platform - Part 8 - Verify logging is going to the correct Log analytics (this post)

App Insights for Power Platform - Part 9 - Power Automate Licencing

App Insights for Power Platform - Part 10 - Custom Connector enable logging

App Insights for Power Platform - Part 11 - Custom Connector Behaviour from Canvas Apps Concern

Monday, 12 June 2023

App Insights for Power Platform - Part 5 - Logging for APIM

Series

App Insights for Power Platform - Part 1 - Series Overview 

App Insights for Power Platform - Part 2 - App Insights and Azure Log Analytics 

App Insights for Power Platform - Part 3 - Canvas App Logging (Instrumentation key)

App Insights for Power Platform - Part 4 - Model App Logging

App Insights for Power Platform - Part 5 - Logging for APIM (this post)

App Insights for Power Platform - Part 6 - Power Automate Logging

App Insights for Power Platform - Part 7 - Monitoring Azure Dashboards 

App Insights for Power Platform - Part 8 - Verify logging is going to the correct Log analytics

App Insights for Power Platform - Part 9 - Power Automate Licencing

App Insights for Power Platform - Part 10 - Custom Connector enable logging

App Insights for Power Platform - Part 11 - Custom Connector Behaviour from Canvas Apps Concern

Overview: APIM is often part of your Power Platform solutions, e.g., monitoring and controlling all inbound and outbound traffic, or wrapping Azure Functions.

Within APIM you can add multiple App Insights instances.  You can send all logging to a single instance and override specific APIs to log to different instances, making the logging nice and granular.

Setup Logging

The diagram below shows where I used the operation parent Id to find a log entry using Transaction search in App Insights; I can see the APIM entry, the call to the third-party backend, and its HTTP response.
You can hook this up so you can see the Canvas App session, then the Function call, which calls APIM, and then the backend call to the gov third-party API.

  1. Logging can be global or set at the API level in APIM.
  2. Telemetry "Sampling" will log a percentage of requests.
  3. "Always log errors" captures any errors APIM gets.
  4. Headers and body are not included in logs unless you specify them.
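
As an illustration of how percentage sampling and "Always log errors" combine, here is a deterministic sketch. Hashing the request id is my choice to make the example reproducible; APIM's internal sampling mechanism may well differ.

```python
import zlib

# Sketch: decide whether to log a request given a sampling percentage,
# while always logging errors regardless of the sample.
def should_log(request_id: str, status: int, sample_pct: int) -> bool:
    if status >= 400:                           # "Always log errors" wins
        return True
    bucket = zlib.crc32(request_id.encode()) % 100
    return bucket < sample_pct                  # log ~sample_pct% of healthy traffic
```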

Series

App Insights for Power Platform - Part 1 - Series Overview 

App Insights for Power Platform - Part 2 - App Insights and Azure Log Analytics 

App Insights for Power Platform - Part 3 - Canvas App Logging (Instrumentation key)

App Insights for Power Platform - Part 4 - Model App Logging

App Insights for Power Platform - Part 5 - Logging for APIM (this post)

App Insights for Power Platform - Part 6 - Power Automate Logging

App Insights for Power Platform - Part 7 - Monitoring Azure Dashboards 

App Insights for Power Platform - Part 8 - Verify logging is going to the correct Log analytics

App Insights for Power Platform - Part 9 - Power Automate Licencing

App Insights for Power Platform - Part 10 - Custom Connector enable logging

App Insights for Power Platform - Part 11 - Custom Connector Behaviour from Canvas Apps Concern

Friday, 9 June 2023

App Insights for Power Platform - Part 2 - App Insights and Azure Log Analytics

Series

App Insights for Power Platform - Part 1 - Series Overview 

App Insights for Power Platform - Part 2 - App Insights and Azure Log Analytics  (this post)

App Insights for Power Platform - Part 3 - Canvas App Logging (Instrumentation key)

App Insights for Power Platform - Part 4 - Model App Logging

App Insights for Power Platform - Part 5 - Logging for APIM 

App Insights for Power Platform - Part 6 - Power Automate Logging

App Insights for Power Platform - Part 7 - Monitoring Azure Dashboards 

App Insights for Power Platform - Part 8 - Verify logging is going to the correct Log analytics

App Insights for Power Platform - Part 9 - Power Automate Licencing

App Insights for Power Platform - Part 10 - Custom Connector enable logging

App Insights for Power Platform - Part 11 - Custom Connector Behaviour from Canvas Apps Concern

There are two ways to set up App Insights:

  1. The Classic approach (soon to be removed), and
  2. The Version 2 approach, also referred to as workspace-based App Insights.

Image1. The Version 2/Workspace-based App Insights approach stores all new logs in Log Analytics storage.

More: Use App Insights "version 2".  The original App Insights stored its logs within itself; this is sometimes referred to as "classic App Insights".  Classic App Insights is being deprecated, so version 2 is compulsory from early 2024.  "Version 2" stores App Insights logs in a workspace (Azure Log Analytics).

We need all our services, i.e., Canvas apps, Dataverse, Power Automate, APIM, ESB, Key Vault, and Azure Functions, to store operation logs using the workspace-based App Insights approach.  We shall discuss Canvas App logging in <App Insights for Power Platform - Part 3 - Canvas App Logging>.

Note: Under the hood, operations logged to the App Insights store consist of 3 parts:

1. The App Id to log to,

2. The content to put into Log Analytics for full logging, and

3. Metric data.
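
Those three parts can be pictured as a simple envelope. The field names below are illustrative, not the real App Insights wire format:

```python
# Sketch: the three parts of a logged operation as a plain envelope.
# Field names are illustrative, not the real App Insights wire format.
def make_envelope(app_id: str, message: str, metrics: dict) -> dict:
    return {
        "appId": app_id,        # 1. which App Insights instance to log to
        "content": message,     # 2. payload stored in Log Analytics for querying
        "metrics": metrics,     # 3. numeric metric data
    }

env = make_envelope("ai-dev-01", "order created", {"durationMs": 42})
```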

Setup App Insights

Open the Azure Portal.

In your subscription you need a resource group and storage; go and add the Log Analytics and App Insights resources.

Set up the Log Analytics instance to connect the App Insights instance to.  The free tier is normally sufficient for demo purposes.

Set up App Insights using a Workspace/Log Analytics, and please name your resources properly.

View your logs

App Insights integrates well with all Azure services, and the logs are easily accessible.  We will go into the App Insights blade and look at the Logs; I added a query that looks for all logs and orders them to show the latest first.

Note: The logs are stored in Log Analytics.  To view them you can use either App Insights or Log Analytics; the syntax is slightly different, see the image below:

Terminology Worth Understanding:

App Insights stores data in Log Analytics; you can read/write through App Insights or Log Analytics.  There is also Azure Metrics.  All of these services fall under the umbrella term Azure Monitor.  When writing to the logs, the data is made up of 3 parts: 1) an identifier for the log, 2) Log Analytics data that can be queried, and 3) metric data.

APIM Monitoring & Logging via Portal:

Sample Kusto Queries:

// Function used to call APIM
dependencies
| where cloud_RoleName == "azure-func-name-01"
| where type == "HTTP"
| where target !contains "login"
| order by timestamp desc

// Check outbound APIM
requests
| where cloud_RoleName == "devapim North Europe"
| order by timestamp desc

// Backend data is in the customDimensions logged by APIM
dependencies
| where type == "Backend"
| order by timestamp desc
| extend req = tostring(customDimensions["Request-Body"])
//| project timestamp, id, req
| where req contains "BJ69 TFF"

// Retrieve Canvas app data based on customDimensions logged
pageViews
| extend
    AppName = tostring(customDimensions["ms-appName"]),
    Env = tostring(customDimensions["ms-environmentId"]),
    LastSuccess = datetime_diff('minute', now(), timestamp)
| where AppName == "Bus Revenue Inspection"
| summarize by Env
//| summarize arg_max(timestamp, *), Count = count() by AppName
//| order by LastSuccess desc
//| project LastSuccess, NoOfPageViews = Count


Example querying Azure Log Analytics for Traces I raised from a Canvas App

// KQL syntax varies slightly when querying Log Analytics rather than App Insights.
AppTraces
| where Message contains "App Loaded with Events issue - Compliance Subject"
| extend
    AppName = tostring(Properties["ms-appName"]),
    Env = tostring(Properties["myappEnvironment"]) // Properties is used instead of customDimensions
| order by TimeGenerated desc


Series

App Insights for Power Platform - Part 1 - Series Overview 

App Insights for Power Platform - Part 2 - App Insights and Azure Log Analytics (this post)

App Insights for Power Platform - Part 3 - Canvas App Logging (Instrumentation key)

App Insights for Power Platform - Part 4 - Model App Logging

App Insights for Power Platform - Part 5 - Logging for APIM 

App Insights for Power Platform - Part 6 - Power Automate Logging

App Insights for Power Platform - Part 7 - Monitoring Azure Dashboards 

App Insights for Power Platform - Part 8 - Verify logging is going to the correct Log analytics

App Insights for Power Platform - Part 9 - Power Automate Licencing

App Insights for Power Platform - Part 10 - Custom Connector enable logging

App Insights for Power Platform - Part 11 - Custom Connector Behaviour from Canvas Apps Concern

Saturday, 18 March 2023

Canvas Apps Workflow logging linkage

Overview: Power Automate has good monitoring and analysis within the product, and Canvas apps use instrumentation to App Insights, allowing for custom tracing.  The issue is linking Canvas app logs to a called workflow.  In this video (2 min), I discuss linking Traces in Azure App Insights with flow runs on Power Automate.

By passing the workflow's run ID back to the calling Canvas app, a deep link will allow support to follow a user's activity, including drilling into the flows being called.
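
A sketch of building such a deep link from the ids the flow returns. The URL shape below is an assumption based on what the maker portal uses, not a documented contract, so treat it as illustrative:

```python
# Sketch: build a Power Automate run deep link from ids returned to the app.
# The URL format is an assumption based on the maker portal, not a documented API.
def flow_run_link(environment_id: str, flow_id: str, run_id: str) -> str:
    return (f"https://make.powerautomate.com/environments/{environment_id}"
            f"/flows/{flow_id}/runs/{run_id}")

link = flow_run_link("env-123", "flow-456", "run-789")
```

Logging this link as a custom dimension next to the Canvas app trace lets first-line support jump straight from the trace to the flow run.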



Add additional logging within your flows to a custom table in Azure Log Analytics:


Sunday, 19 February 2023

Setting up Azure Application Insights for Monitoring Power Platform Canvas Apps

Overview: We are building key applications in Power Apps.  It is essential that the appropriate level of monitoring, alerting, and tracing is set up.  The diagram below provides an overview of likely solutions.

The top half of the diagram calls out the client components used in the application; you need to add instrumentation keys to ensure logging goes to the correct DTAP environment instance, i.e., production components used in the solution must point to the production instance of Azure Application Insights.  The diagram only deals with Production; I prefer to point the lower environments to a non-prod instance.
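
The per-environment keying can be sketched as a simple lookup that fails fast when an environment is missing, so a wrong key never ships silently. The keys below are placeholders, not real instrumentation keys:

```python
# Sketch: resolve the App Insights instrumentation key per DTAP environment.
# The keys are placeholders; failing fast beats silently logging to prod.
KEYS = {
    "dev":  "00000000-dev",
    "test": "00000000-test",
    "prod": "00000000-prod",
}

def instrumentation_key(environment: str) -> str:
    try:
        return KEYS[environment]
    except KeyError:
        raise ValueError(f"no instrumentation key configured for {environment!r}")
```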

The bottom half discusses decisions that are key to making the monitoring and alerting successful. It should aim to ensure that:
  1. Errors are detected,
  2. Errors can be traced,
  3. The system is healthy,
  4. It is noticed when any component is down,
  5. It is known whether performance is stable,
  6. I am warned before the system goes down,
  7. Alerting is set up (don't over-alert, and ensure it is focused on the right people), and
  8. Deployments are validated.
I turn on the experimental features:

A gotcha with App Insights in Canvas apps applies to managed solutions: if you add an App Insights instrumentation key, or leave it blank, there is no easy way to override the value.  You can add an unmanaged layer, but the issue is that on the next deployment the app only picks up the new version once the unmanaged layer is removed, and you then need to manually re-add the unmanaged layer, with the appropriate instrumentation key, after each deployment.  There are workarounds involving extracting the solution, amending the setting, and repackaging using the Power Apps CLI, but they have issues.


Other thoughts:
It is a good idea to use the App Insights SDKs to trace key info within each service.
Power Automate should use the try/catch pattern for logging.  I log to Azure Log Analytics using the built-in Power Apps connector.

Saturday, 11 February 2023

Audit log retention in Dataverse

Overview: Audit data log retention is now fairly easy to implement in Dataverse; you can set what is audited and easily set the retention duration.

Thoughts: As a simple version, I'd audit all changes in Dataverse and set the retention to 7 years.  This could end up costing a considerable amount of money, so consider: do I need to audit everything, do I need to retain it this long, can I use a long-term storage retention approach?  There are a variety of reasons for customising Dataverse data retention, including: to comply with laws and the potential need for litigation, to comply with industry standards/certifications, and to keep a full history to understand why we have the current data position.
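
Back-of-envelope arithmetic makes the cost question concrete. All figures below are made-up inputs purely to illustrate the calculation, not real Dataverse sizing numbers:

```python
# Sketch: rough audit-storage estimate. All inputs are illustrative.
def audit_storage_gb(rows_per_day: int, bytes_per_row: int, years: int) -> float:
    total_bytes = rows_per_day * bytes_per_row * 365 * years
    return total_bytes / (1024 ** 3)   # convert bytes to GiB

# e.g. 100k audited changes/day at ~2 KB each, kept for 7 years
estimate = round(audit_storage_gb(100_000, 2048, 7), 1)
```

Roughly half a terabyte for this made-up workload, which shows why auditing everything for 7 years deserves a second look.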
  
Ultimately, I need to identify/understand how to store audit history, clean it up when no longer needed, ensure it is not affecting live system performance, and ensure it can be retrieved by authorised people within the timeline required for each project or at an enterprise level.

If a system changes a lot and uses blobs, the audit history will be large and Dataverse is not necessarily the best place to store long term audit history.

Technical: Dataverse stores data in an Audit entity (table); the infrastructure was changed in late 2022 to handle the audit data separately, to allow better non-functional characteristics.

Saturday, 14 January 2023

APIM Logging

Overview: Azure's API Management is a big service; it is worth understanding the logging capability so you can effectively analyse traffic.

Thoughts:

  • Multiple App Insights instances can be set up, with default logs going to a specific one.
  • Each API can be overridden to log to any of the App Insights instances added to APIM.
  • The old "Classic" App Insights stored data internally, whereas with the new "workspace-based" App Insights, which I think of as "V2 App Insights connected to Log Analytics", new data is stored in the workspace.
  • If you upgrade App Insights, the results blend from two storage locations: the old data stored internally within App Insights and the new data stored within Log Analytics.  If you query Log Analytics directly, you only see the new data.
  • Security for App Insights should be done at the Resource Group (RG) level; there are App Insights roles for use at RG level.  If the workspace is in a different resource group to the connected App Insights instance, ensure you sort out the permissions in both RGs.
  • The OpenTelemetry project is making strides forward, and for APIs it will be great.

Problem: I recently migrated a customer's Dev, Test, Acceptance, Pre-prod and (not yet) Production environments to use App Insights instances running on Log Analytics (sometimes referred to as V2).  Logging wasn't working correctly.


Initial Hypothesis: I have complicated resource groups crossing DTAP boundaries.  By default, APIM has a catch-all logging setup per APIM instance, and then specific APIs' settings are changed to log to specific App Insights instances.

Steps:

First, rename the old classic-type AppInsights instance, e.g., "appinsights-dev" becomes "appinsights-dev-delete".

Create a new AppInsights instance using the V2 Log Analytics option and give it the original name.  The client opted for the name to stay the same; it would be simpler to give it a name like "appinsights-dev02".  The client also wanted to use a shared Log Analytics instance per environment, e.g., "loganalytics-dev-shared".






Monday, 24 January 2022

CorrelationId thoughts for improved logging in SPAs

Problem: Single Page Applications (SPAs) generate a new correlationId/guid on path changes only; when logging to something like App Insights, a SPA built with a framework like Angular will log one page view with multiple actions that share the same guid.

Initial Hypothesis: You can work out the user's journey by using the page-view guid and tracing the actions to drill down to the issue.  It is far easier to generate a new guid for each action, making error tracing simpler/faster for first-line support.  Performance issues are also far easier to replicate, and reporting on performance changes easier to automate.
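
The fix amounts to minting a fresh id per user action while keeping the page-view id so actions can still be grouped back to their page. A framework-neutral sketch (Python purely for illustration; the Angular resolution follows):

```python
import uuid

# Sketch: mint a fresh correlation id per user action, while keeping the
# page-view id so actions can still be grouped back to their page.
class ActionTracer:
    def __init__(self):
        self.page_view_id = str(uuid.uuid4())    # one id per page view

    def trace_action(self, name: str) -> dict:
        return {
            "pageViewId": self.page_view_id,     # shared across the page's actions
            "actionId": str(uuid.uuid4()),       # unique per action for drill-down
            "name": name,
        }

tracer = ActionTracer()
a, b = tracer.trace_action("save"), tracer.trace_action("search")
```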

SPA/Angular Resolution:

import { Injectable } from '@angular/core';
import { ApplicationInsights } from '@microsoft/applicationinsights-web';
import { environment } from 'src/environments/environment';

@Injectable({
  providedIn: 'root'
})
export class AppinsightsLoggingService {
  appInsights: ApplicationInsights;

  constructor() {
    this.appInsights = new ApplicationInsights({
      config: {
        instrumentationKey: environment.appInsights.instrumentationKey,
        enableRequestHeaderTracking: true,
        enableCorsCorrelation: true,
        loggingLevelTelemetry: 1,
        enableAutoRouteTracking: true // option to log route changes
      }
    });
    this.appInsights.loadAppInsights();
    this.appInsights.trackPageView();
  }

  logPageView(name?: string, url?: string) { // option to call manually
    alert(name);
    this.appInsights.trackPageView({
      name: name,
      uri: url
    });
  }

  public getTraceId() { // call when needed
    return this.appInsights.context.telemetryTrace.traceID;
  }
}


Note: Thanks to Pravesh Chourasia for showing me how to do this.