Friday 17 February 2023

Postman Monitor for Continuous Monitoring and Alerting in MS Teams

Overview: Pretty much every tester and developer loves Postman, because it makes our lives easier and it's just plain awesome.  Postman is bringing out tons of new features, and I was playing around today looking at how I could do continuous monitoring with my Postman collections.

Thoughts & Playing:

I have a Postman collection that runs 8 requests and does 14 asserts.  The first request gets a new OAuth token using AAD login.  Then I run a series of requests, asserting on each call that I get a 200 response code and that the response time is less than 3 seconds.  I can run the collection locally.  Level 100 API verification looks good.

In the past, I have taken this collection and run it from a shortcut on my desktop using PowerShell with the Postman CLI to display the results.  Makes my life easier.

I then added an Elgato Stream Deck so I can run the monitor with a single button push (more me playing than real value).  I'd say I'm at level 200 in continuous monitoring capability.

Next, I set up a monitor on the collection, which allows me to log in and view the dashboard and trace runs, great stuff, and I get an email alert if anything goes wrong.  So now I'm getting serious about monitoring and alerting on my APIs.  Level 300 is approaching.




Postman monitoring has integrations for MS Teams and Slack.  It can also send logs to Datadog and New Relic, but not Application Insights (I reckon this will come soon).  I set up a Teams channel with an incoming webhook, and I can send in the results using Postman, but it's way easier to use the integration on the Monitor to push the result of each run, or automatically after 3 failures.

Summary:  Postman monitoring allows me to send detailed API requests at different intervals, so for production I'm thinking: 
  • Every 5 minutes: health and basic checks (look for performance degradation, service slowdown or failure; add alerts but don't over-alert, so post to a Teams channel and only notify Teams groups if the service actually breaks),
  • Hourly: check key functionality/APIs including CRUD operations, and clean up afterwards (ensures the service is operating for the most important endpoints), and
  • Daily: in the early hours, run a full regression set of API tests and clean up afterwards (support/help desk need to review the results each day).
Don't over-alert; let me say that again, don't over-alert.  Alerting is like water, you definitely need a little, but floods are not great.  With Teams & Slack, it's easy to push results and issues into a channel so key people are aware, and it gives a much better experience than email alerting.

I like the idea of using Postman as its infrastructure is separate from the Azure/MS stack I generally use, including Application Insights.

What Next:  I'd like to figure out how to push results into my logs so I can report off a single source.  I could embed the Postman monitoring dashboards in iframes, but I'd probably use an Azure Logic App or Azure Function to listen for the Postman POST; from there I can format Adaptive Cards for Teams and Outlook, and easily integrate Twilio for SMS or maybe WhatsApp.  From the Logic App/Function I can use an Application Insights SDK to add tracing.
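As a rough sketch of the Azure Function option, the run.ps1 of an HTTP-triggered PowerShell function could receive the Postman POST and relay a summary card to a Teams incoming webhook.  The payload field names, the TEAMS_WEBHOOK_URL app setting and the card layout below are my assumptions, not Postman's documented webhook shape, and the standard Response output binding is assumed:

using namespace System.Net

param($Request, $TriggerMetadata)

# The POST body is JSON, so the Functions host hands it over already deserialised.
# The collection/failed field names are assumptions - adjust to the real payload.
$body = $Request.Body
$collection = $body.collection
$failed = $body.failed

# Hypothetical Teams incoming webhook URL stored as an app setting
$teamsWebhook = $env:TEAMS_WEBHOOK_URL

$card = @{
    type        = 'message'
    attachments = @(@{
        contentType = 'application/vnd.microsoft.card.adaptive'
        content     = @{
            '$schema' = 'http://adaptivecards.io/schemas/adaptive-card.json'
            type      = 'AdaptiveCard'
            version   = '1.4'
            body      = @(@{ type = 'TextBlock'; text = "Monitor run for $collection - $failed failure(s)"; wrap = $true })
        }
    })
}

Invoke-RestMethod -Uri $teamsWebhook -Method Post -ContentType 'application/json' -Body ($card | ConvertTo-Json -Depth 10)

# Write-Host output is collected by the Functions host and ends up in App Insights traces
Write-Host "Relayed Postman monitor result for $collection"

Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = [HttpStatusCode]::OK
    Body       = 'received'
})

The host-level traces are enough to start correlating runs; richer telemetry could be added later.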

Combining this with correlation IDs and App Insights, I can see issues, have them summarised, get the right level of alerting, and trace specific issues quickly.  Ideally we capture issues before customers report them, and if a customer does report an issue it can be 100% traced, remediated and fixed for all customers quickly.  Spotting API changes and compatibility issues is also a nice benefit of this approach.

  


Sunday 12 February 2023

Adding Adaptive Card messages into Teams using Postman

WIP: I've wanted to play with Adaptive Cards for a while, but this post is about using the incoming webhooks that Teams can expose so that Adaptive Cards can be pushed into a channel.

1. Configure a Teams channel to support incoming webhooks

2. Run a Postman POST request to push the card into the MS Teams channel


Tip: Ensure you add the header to the Postman request:
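For reference, here's the same POST expressed in PowerShell; the webhook URL is a placeholder for whatever Teams generates for your channel, and I'm assuming the header in question is Content-Type: application/json:

# Placeholder URL - use the one the Teams incoming webhook connector generates for your channel
$webhookUrl = 'https://contoso.webhook.office.com/webhookb2/your-webhook-id'

# Minimal Adaptive Card wrapped in the message/attachments envelope Teams expects
$payload = @'
{
  "type": "message",
  "attachments": [
    {
      "contentType": "application/vnd.microsoft.card.adaptive",
      "content": {
        "$schema": "http://adaptivecards.io/schemas/adaptive-card.json",
        "type": "AdaptiveCard",
        "version": "1.4",
        "body": [
          { "type": "TextBlock", "text": "Hello from Postman via the Teams webhook", "wrap": true }
        ]
      }
    }
  ]
}
'@

# Send the card; the Content-Type header is what makes the webhook accept the body
Invoke-RestMethod -Uri $webhookUrl -Method Post -ContentType 'application/json' -Body $payload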




Saturday 11 February 2023

Audit log retention in Dataverse

Overview: Audit log retention is now fairly easy to implement in Dataverse; you can choose what is audited and set how long it is retained for.

Thoughts: As a simple version, I'd audit all changes in Dataverse and set the retention to 7 years.  Now this could end up costing you a considerable amount of money, so consider: do I need to audit everything, do I need to retain it this long, can I use a long-term storage retention approach?  There are a variety of reasons for customising Dataverse data retention, including: to comply with laws and the potential need for litigation, to comply with industry standards/certification, and to keep a full history to understand why we have the current data position.
  
Ultimately, I need to identify/understand how to store audit history, clean it up when no longer needed, ensure it is not affecting live system performance, and make sure it can be retrieved by authorised people within the timeline required for each project or at an enterprise level.

If a system changes a lot and uses blobs, the audit history will be large, and Dataverse is not necessarily the best place to store long-term audit history.

Technical: Dataverse stores audit data in an Audit entity (table); the infrastructure was changed in late 2022 to handle the audit data separately, allowing better non-functional requirements to be met.
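As a rough sketch of setting the retention programmatically against the Dataverse Web API (assuming you already have a bearer token, that the environment URL and organization row id below are placeholders, and that the auditretentionperiodv2 organization setting is what drives retention):

# Placeholders - swap in your environment URL, organization row id and a valid AAD token
$orgUrl = 'https://yourorg.crm.dynamics.com'
$orgId  = '00000000-0000-0000-0000-000000000000'
$token  = '<bearer token from AAD>'

$headers = @{ Authorization = "Bearer $token"; 'Content-Type' = 'application/json' }

# 2555 days is roughly 7 years; -1 would mean keep forever (auditretentionperiodv2 is my assumption for the setting name)
$body = @{ auditretentionperiodv2 = 2555 } | ConvertTo-Json

Invoke-RestMethod -Method Patch -Uri "$orgUrl/api/data/v9.2/organizations($orgId)" -Headers $headers -Body $body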


Wednesday 1 February 2023

SaaS Product surrounding services

Onboarding - Ability to allow customers to sign up for trials, sign up as a customer, and handle payments and billing.

Sales Channels - Ability to bring on new customers from various channels like other websites, digital adverts, telephone, ...  This can range from very simple to super complicated.

Service Page - Allows clients to see the current status of your SaaS products, e.g. 

https://portal.office.com/servicestatus

https://status.quickbooks.intuit.com

A lot of incident management software offers white-labelled status pages.

SaaS solutions include: 

statuscast.com

statushub.com

Sunday 15 January 2023

Postman to verify OpenAPIs are running

Problem:  Our teams rely on a 3rd party API for a new project being delivered; the APIs are in a state of change and are constantly up and down, making life tough for the teams relying on them.

Hypothesis:  I need a quick way to check the APIs to see if they are all working in dev and test.  I have two Postman collections for the REST APIs.  If I combine them and check the key APIs using Postman, I can save myself and others time, as I'll know the current state of the APIs.

Solution: Create a single Postman collection that does the API verification; you can make it more complex with data and variables.

Problem:  I can open Postman and run the tests, which takes a few minutes.  We need to do this more quickly.

Hypothesis: I'd like to be able to run the tests quickly on demand.  Use the Postman CLI and PowerShell to run the collection and display the result.

Solution

1) Add the Postman CLI to my machine:

PS> powershell.exe -NoProfile -InputFormat None -ExecutionPolicy AllSigned -Command "[System.Net.ServicePointManager]::SecurityProtocol = 3072; iex ((New-Object System.Net.WebClient).DownloadString('https://dl-cli.pstmn.io/install/win64.ps1'))"

2) In Postman, generate an API Key for the Collection > Run Collection > Automate runs via CLI > Generate the API Key > Copy the generated code


3) Run the code in PS to verify it works correctly.

4) Copy the PS code into a newly created .ps1 file on your local machine; I added a Read-Host line so I can see the result before the window closes.
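For reference, my API.ps1 ends up looking roughly like this; the API key and collection id below are placeholders for whatever Postman generates for you:

# API.ps1 - log in to Postman, run the verification collection, then pause so the results stay visible
postman login --with-api-key PMAK-your-generated-key
postman collection run 12345678-aaaa-bbbb-cccc-1234567890ab
Read-Host -Prompt 'Press Enter to close'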


5) Run the API.ps1 file and verify the result

6) Set up a desktop shortcut to run it and see the result.  Right-click the API.ps1 file and create a shortcut on your desktop, then right-click the shortcut and amend the Target value:

C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -ExecutionPolicy Bypass -File C:\Users\PaulBeck\Downloads\Projects\PoC\Postman\API.ps1

7) Save and run the shortcut to verify.

Problem:  Monitor and alert on whether the DTAP APIs are working and performing.

Resolution: I want to monitor that the endpoints specified in my Postman collection are working in Dev, UAT et al.; a Postman Monitor can cover more than one endpoint.

Next steps: Add to automated DevOps processes, using Newman.
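As a sketch of that next step (assuming Node.js is available on the build agent; the file names below are placeholders), a pipeline step could run the exported collection with Newman and publish a JUnit report:

# Install Newman on the agent, run the exported collection against the Dev environment, and emit a JUnit report for the pipeline
npm install -g newman
newman run .\MyApis.postman_collection.json -e .\Dev.postman_environment.json --reporters cli,junit --reporter-junit-export .\newman-results.xml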

Saturday 14 January 2023

APIM Logging

Overview: Azure's API Management is a big service; it is worth understanding the logging capability so you can effectively analyse traffic.

Thoughts:

  • Multiple App Insights instances can be set up, with default logs going to a specific App Insights instance.
  • Each API can be overridden to log to any of the App Insights instances added to APIM.
  • The old "Classic" App Insights stored data internally, whereas with the new "workspace-based" App Insights (I think of it as "V2 App Insights connected to a Log Analytics workspace"), the new data is stored in the workspace.
  • If you upgrade App Insights, the results blend from two storage locations: the old data stored internally with App Insights and the new data stored within Log Analytics - if you query Log Analytics directly, you only see the new Log Analytics data.
  • Security for App Insights should be done at the Resource Group (RG) level; there are App Insights roles for use at RG level.  If the workspace is in a different resource group to the connected App Insights instance, ensure you sort out the permissions in both RGs.
  • The OpenTelemetry project is making strides forward, and for APIs it will be great.

Problem: I recently migrated a customer's Dev, Test, Acceptance, Pre-prod and Production (not yet done) environments to use App Insights instances running on Log Analytics (sometimes referred to as V2).  Logging wasn't working correctly.


Initial Hypothesis: I have complicated resource groups crossing DTAP boundaries.  By default, APIM has a catch-all logging setup per APIM instance, and then specific APIs' settings are changed to log to specific App Insights instances.

Steps:

My first step was to rename the old classic-type App Insights instance, e.g. "appinsights-dev" becomes "appinsights-dev-delete".

Create a new App Insights instance using the V2 Log Analytics (workspace-based) option and give it the original name ("appinsights-dev").  The client opted for the name to be the same; it would be simpler to give it a name like "appinsights-dev02".  The client also wanted to use a shared Log Analytics instance per environment, e.g. "loganalytics-dev-shared".
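A rough sketch of that creation step with the Az PowerShell modules (the resource names are the examples above, the resource group and region are placeholders, and you'll need an Az.ApplicationInsights version that supports workspace-based creation):

# Assumes Connect-AzAccount has been run and the right subscription is selected
$rg = 'rg-apim-dev'   # placeholder resource group

# Look up the shared Log Analytics workspace for the environment
$ws = Get-AzOperationalInsightsWorkspace -ResourceGroupName $rg -Name 'loganalytics-dev-shared'

# Create the workspace-based (V2) App Insights instance with the original name
New-AzApplicationInsights -ResourceGroupName $rg -Name 'appinsights-dev' -Location 'westeurope' -Kind web -WorkspaceResourceId $ws.ResourceId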