Sunday 19 February 2023

Setting up Azure Application Insights for Monitoring Power Platform Canvas Apps

Overview: We are building key applications in Power Apps. It is essential that the appropriate level of monitoring, alerting, and tracing is set up. The diagram below provides an overview of likely solutions.

The top half of the diagram calls out the client components used in the application. You need to add instrumentation keys to ensure logging goes to the correct DTAP environment instance, i.e., production components used in the solution must point to the production instance of Azure Application Insights. The diagram only deals with Production; I prefer to point the lower environments to a non-production instance.

The bottom half discusses decisions that are key to making the monitoring and alerting successful. It should aim to ensure that:
  1. Errors are detected,
  2. Issues can be traced,
  3. System health is visible,
  4. Component outages are spotted,
  5. Performance stability is measured,
  6. I am warned before the system goes down,
  7. Alerting is set up (don't over-alert, and ensure it is focused on the right people), and
  8. Deployments are validated.
I turn on the experimental features that pass errors through to Application Insights.

A gotcha with App Insights in Canvas apps applies to managed solutions: whether you add an App Insights instrumentation key or leave it blank, there is no easy way to override the value per environment. You can add an unmanaged layer, but the next deployment only updates the app once the unmanaged layer is removed, so you will need to manually re-add the unmanaged layer with the appropriate instrumentation key after each deployment. There are workarounds that extract the solution, amend the setting, and then repackage using the Power Platform CLI, but they have issues.
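For reference, a rough sketch of that extract-and-repack workaround using the Power Platform CLI. The pac verbs are real, but the solution file names are placeholders, and the exact file inside the unpacked output that holds the key varies, so inspect your own unpacked CanvasApps folder:

# Unpack the exported solution so the canvas app definition can be inspected/edited
pac solution unpack --zipfile .\MySolution_managed.zip --folder .\src --packagetype Managed

# Amend the App Insights instrumentation key in the unpacked output under .\src\CanvasApps
# (the exact file holding the key varies - search the unpacked files for the old key value)

# Repack the amended solution for import into the target environment
pac solution pack --zipfile .\MySolution_prod.zip --folder .\src --packagetype Managed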


Other thoughts:
It is a good idea to use the App Insights SDKs to trace key info within each service.
Power Automate should use the try/catch scope pattern for logging. I log to Azure Log Analytics using the built-in Azure Log Analytics Data Collector connector.
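Under the hood that connector posts to the Log Analytics HTTP Data Collector API. A minimal PowerShell sketch of the same call, assuming a hypothetical workspace ID and key, and a made-up custom log type of PowerAppsError:

# Workspace details (hypothetical - take these from the workspace's Agents page)
$workspaceId = "00000000-0000-0000-0000-000000000000"
$sharedKey   = "<primary key>"
$logType     = "PowerAppsError"   # surfaces as the custom log table PowerAppsError_CL

$body  = '[{"App":"MyCanvasApp","Severity":"Error","Message":"Something failed"}]'
$date  = [DateTime]::UtcNow.ToString("r")
$bytes = [Text.Encoding]::UTF8.GetBytes($body)

# Build the HMAC-SHA256 signature the Data Collector API requires
$stringToSign = "POST`n$($bytes.Length)`napplication/json`nx-ms-date:$date`n/api/logs"
$hmac = New-Object System.Security.Cryptography.HMACSHA256
$hmac.Key = [Convert]::FromBase64String($sharedKey)
$signature = [Convert]::ToBase64String($hmac.ComputeHash([Text.Encoding]::UTF8.GetBytes($stringToSign)))

Invoke-RestMethod -Method Post `
  -Uri "https://$workspaceId.ods.opinsights.azure.com/api/logs?api-version=2016-04-01" `
  -ContentType "application/json" `
  -Headers @{ "Authorization" = "SharedKey ${workspaceId}:$signature"; "Log-Type" = $logType; "x-ms-date" = $date } `
  -Body $body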

Friday 17 February 2023

Postman Monitor for Continuous Monitoring and Alerting in MS Teams

Overview: Pretty much every tester and developer loves Postman, because it makes our lives easier and it's just plain awesome. Postman is bringing out tons of new features, and I was playing around today looking at how I could do continuous monitoring with my Postman collections.

Thoughts & Playing:

I have a Postman collection that runs 8 requests and does 14 asserts. The first request gets a new OAuth token using AAD login. Then I run a series of requests, asserting on each call that I get a 200 response code and that the response time is less than 3 seconds. I can run the collection locally. Level 100 API verification looks good.
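For a rough equivalent of that 200-and-under-3-seconds check outside Postman, a PowerShell sketch (the endpoint URL is hypothetical):

# Hypothetical endpoint - substitute one of the collection's requests
$url = "https://api.example.com/health"

try {
    $start = Get-Date
    $response = Invoke-WebRequest -Uri $url -UseBasicParsing
    $seconds = ((Get-Date) - $start).TotalSeconds

    # Assert: 200 response code and under 3 seconds, mirroring the Postman tests
    if ($response.StatusCode -eq 200 -and $seconds -lt 3) {
        Write-Host "PASS: $url returned 200 in $([math]::Round($seconds,2))s"
    } else {
        Write-Host "FAIL: $url returned $($response.StatusCode) in $([math]::Round($seconds,2))s"
    }
} catch {
    # Invoke-WebRequest throws on non-2xx responses
    Write-Host "FAIL: $url - $($_.Exception.Message)"
}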

In the past, I have taken this collection and run it as a shortcut on my desktop using PowerShell with the Postman CLI to display the results. Makes my life easier.

I then added an Elgato Stream Deck so I can run the monitor with a single button push (more me playing than real value). I'd say I'm at level 200 in continuous monitoring capability.

Next, I set up a monitor on the collection, which allows me to log in, view the dashboard, and trace runs. Great stuff, and I get an email alert if anything goes wrong. So now I'm getting serious about monitoring and alerting on my APIs. Level 300 is approaching.




Postman monitoring has integrations for MS Teams and Slack. It can also send logs to Datadog and New Relic, but not Application Insights (reckon this will come soon). I set up a Teams channel with an incoming webhook, and I can send in the results using Postman, but it's way easier to use the integration on the Monitor to push the result of each run, or automatically after 3 failures.

Summary: Postman monitoring allows me to send detailed API requests at different intervals, so I'm thinking for production:
  • Every 5 minutes, health and basic checks (look for performance and service slowdown or failure; add alerts but don't over-alert, so use a Teams channel, except if the service breaks, then Teams groups),
  • Hourly, check key functionality/APIs including CRUD operations and clean up (ensure the service is operating for the most key endpoints), and
  • Daily, in the early hours, run a full regression API set of tests and clean up afterwards (support/help desk need to review each day).
Don't over-alert; let me say that again, don't over-alert. Alerting is like water: you definitely need a little, but floods are not great. So with Teams & Slack, it's easy to push results and issues into a channel so key people are aware, and it gives a much better experience than email alerting.

I like the idea of using Postman as its infrastructure is separate from mine, since I generally use the Azure/MS stack including Application Insights.

What Next: I'd like to figure out how to push results into my logs for reporting off a single source. I could embed the Postman monitoring into iFrames, but I'd probably use an Azure Logic App or Azure Function to listen for the Postman POST; then I can format adaptive cards for Teams and Outlook, and easily integrate Twilio for SMS or maybe WhatsApp. From the Logic App I can use an Application Insights SDK to add tracing.
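A minimal sketch of that listener as an HTTP-triggered PowerShell Azure Function. The payload fields and the Teams webhook URL are assumptions; inspect a real Postman monitor POST before relying on them:

using namespace System.Net

# run.ps1 of an HTTP-triggered PowerShell Azure Function
param($Request, $TriggerMetadata)

# Hypothetical shape of the Postman monitor webhook payload
$monitorName = $Request.Body.monitor.name
$failures    = $Request.Body.run.stats.assertions.failed

if ($failures -gt 0) {
    # Forward a simple message to a Teams incoming webhook (URL is a placeholder)
    $message = @{ text = "Monitor '$monitorName' reported $failures failed assertion(s)" } | ConvertTo-Json
    Invoke-RestMethod -Method Post -Uri "https://example.webhook.office.com/webhookb2/..." `
        -ContentType "application/json" -Body $message
}

Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = "Received"
})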

Combining this with correlation IDs and App Insights, I can see issues, have them summarised, get the right level of alerting, and trace specific issues quickly. Ideally we capture issues before customers report them, and if a customer reports an issue it can be 100% traced, remediated, and fixed for all customers quickly. Tracking changes to APIs and compatibility is also a nice benefit of this approach.

  


Sunday 12 February 2023

Adding Adaptive Card messages into Teams using Postman

WIP: I've wanted to play with adaptive cards for a while, but this post is about using the webhooks that Teams can expose so that adaptive cards can be pushed into a channel.

1. Configure a Teams channel to support incoming webhooks

2. Run a Postman POST request to push the card into the MS Teams channel


Tip: Ensure you add the Content-Type: application/json header to the Postman request.
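For reference, the same POST sketched in PowerShell (the webhook URL is a placeholder for the one generated in step 1):

# Incoming webhook URL generated when configuring the Teams channel (placeholder)
$webhookUrl = "https://example.webhook.office.com/webhookb2/..."

# Wrap an Adaptive Card in the message envelope Teams incoming webhooks expect
$payload = @{
    type        = "message"
    attachments = @(@{
        contentType = "application/vnd.microsoft.card.adaptive"
        content     = @{
            '$schema' = "http://adaptivecards.io/schemas/adaptive-card.json"
            type      = "AdaptiveCard"
            version   = "1.4"
            body      = @(@{ type = "TextBlock"; text = "Hello from Postman/PowerShell"; weight = "Bolder" })
        }
    })
} | ConvertTo-Json -Depth 8

Invoke-RestMethod -Method Post -Uri $webhookUrl -ContentType "application/json" -Body $payload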




Saturday 11 February 2023

Audit log retention in Dataverse

Overview: Audit data log retention is now fairly easy to implement in Dataverse; you can set what is audited and easily set the retention duration.

Thoughts: As a simple version, I'd audit all changes in Dataverse and set the retention to 7 years. Now this could end up costing you a considerable amount of money, so consider: do I need to audit everything, do I need to retain it this long, can I use a long-term storage retention approach? There are a variety of reasons for customising Dataverse data retention, including: complying with laws and the potential need for litigation, complying with industry standards/certification, and keeping a full history to understand why we have the current data position.
  
Ultimately, I need to identify/understand how to store audit history, clean it up when no longer needed, ensure it is not affecting the live system's performance, and ensure it can be retrieved by authorised people in the timeline required for each project or at an enterprise level.

If a system changes a lot and uses blobs, the audit history will be large, and Dataverse is not necessarily the best place to store long-term audit history.

Technical: Dataverse stores audit data in an Audit entity (table); the infrastructure was changed in late 2022 to handle audit data separately, allowing better non-functional requirements to be met.
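As a quick way to eyeball what is being captured, the audit table can be queried over the Dataverse Web API. A minimal sketch, assuming a hypothetical org URL and an already-acquired AAD bearer token:

# Hypothetical org URL and token - acquire the token via your usual AAD flow
$orgUrl = "https://yourorg.crm.dynamics.com"
$token  = "<bearer token>"

# Retrieve the 10 most recent audit records from the audit table
$result = Invoke-RestMethod -Method Get `
    -Uri "$orgUrl/api/data/v9.2/audits?`$select=createdon,operation,action&`$top=10" `
    -Headers @{ Authorization = "Bearer $token"; Accept = "application/json" }

$result.value | Select-Object createdon, operation, action | Format-Table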

Wednesday 1 February 2023

SaaS Product surrounding services

Onboarding - Ability to allow customers to sign up for trials, sign up as a customer, and handle payments and billing.

Sales Channels - Ability to bring on new customers from various channels like other websites, digital adverts, telephone, ... Can range from very simple to super complicated.

Service Page - Allow clients to see the current status of your SaaS products, e.g.

https://portal.office.com/servicestatus

https://status.quickbooks.intuit.com

A lot of incident management software offers white-labelled status pages.

SaaS solutions include: 

statuscast.com

statushub.com

Sunday 15 January 2023

Postman to verify OpenAPIs are running

Problem: Our teams rely on a 3rd-party API for a new project being delivered; the APIs are in a state of change and are constantly up and down, making life tough for the teams relying on them.

Hypothesis: I need a quick way to check the APIs to see if they are all working in dev and test. I have two Postman collections for the REST APIs. If I combine them and check the key APIs using Postman, I can save myself and others time, as I'll know the current state of the APIs.

Solution: Create a Postman collection that does the API verification; you can make it more complex with data and variables.

Problem: I can open Postman and run the tests, which takes a few minutes. We need to do this quicker.

Hypothesis: I'd like to be able to run the tests quickly on demand. Use the Postman CLI and PowerShell to run the collection and display the result.

Solution

1) Add the Postman CLI to my machine:

PS> powershell.exe -NoProfile -InputFormat None -ExecutionPolicy AllSigned -Command "[System.Net.ServicePointManager]::SecurityProtocol = 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://dl-cli.pstmn.io/install/win64.ps1'))"

2) In Postman, generate an API key for the collection: Run Collection > Automate runs via CLI > Generate the API Key > copy the generated code.
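The generated code looks roughly like this (the collection ID and API key below are placeholders):

# Authenticate the Postman CLI, then run the collection
postman login --with-api-key PMAK-xxxxxxxxxxxxxxxxxxxxxxxx
postman collection run 12345678-1234-1234-1234-123456789abc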


3) Run the code in PS to verify it works correctly.

4) Copy the PS code into a newly created .ps1 file on your local machine; I added a read line so I can see the result.
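API.ps1 ends up looking something like this (again, placeholder IDs):

# API.ps1 - run the Postman collection and pause so the results stay visible
postman login --with-api-key PMAK-xxxxxxxxxxxxxxxxxxxxxxxx
postman collection run 12345678-1234-1234-1234-123456789abc

# Keep the window open until a key is pressed
Read-Host "Press Enter to close"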


5) Run the API.ps1 file and verify the result

6) Set up a desktop shortcut to run it and see the result. Right-click the API.ps1 file and create a shortcut on your desktop. Right-click the shortcut and amend the target value:

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy Bypass -File C:\Users\PaulBeck\Downloads\Projects\PoC\Postman\API.ps1

7) Save and run the shortcut to verify.

Problem: Monitor and alert on whether the DTAP APIs are working and performant.

Resolution: I want to monitor that the endpoints specified in my Postman collections in Dev, UAT, et al. are working; this can cover more than one endpoint using Postman Monitor.

Next steps: Add to automated DevOps processes, using Newman.
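As a sketch, the same collection can run inside a pipeline step with Newman (the file names are placeholders for exported collection and environment files):

# Install Newman and run the exported collection with a JUnit report for the pipeline
npm install -g newman
newman run .\API.postman_collection.json --environment .\UAT.postman_environment.json `
    --reporters cli,junit --reporter-junit-export .\newman-results.xml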