
Wednesday 31 January 2024

Low code ROI/TCO and observability - Monitoring Low code Platforms

Overview: With low code gaining tremendous traction, the rate at which apps are being built is increasing quickly. More citizen developers can build, and pro developers can build more.

Governance helps maintain the quality of the apps and their code, but as apps need to be updated, maintaining and updating low-code solutions becomes significantly expensive. The P-F curve has been around for a while for physical assets (I first saw it 20 years ago with mining equipment in Southern Africa, where machine maintenance on the mines helped significantly reduce the cost of the assets/equipment and reduce catastrophic outages, but I digress...).

Microsoft Low Code/Power Platform:
All low-code platforms have the same issues: governance improves the situation, but it does not address the P-F interval, and that is where monitoring comes into play. Microsoft's canvas apps can ship logs to Azure Monitor (Application Insights), and the platform also has built-in analytics. These are great and should be implemented. Most enterprises have multiple environments and tenants in Power Platform, and ALM is used to improve the quality of the applications being released.
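Once that telemetry lands in a workspace-based Application Insights instance, you can query it from your own tooling. Below is a minimal sketch using the @azure/monitor-query SDK; the workspace ID, the app role name filter, and the severity threshold are assumptions for illustration, not values from this post.

```typescript
// Sketch: pulling canvas-app telemetry out of a workspace-based Application
// Insights instance with the Azure Monitor Query SDK.
import { DefaultAzureCredential } from "@azure/identity";
import { LogsQueryClient, LogsQueryResultStatus } from "@azure/monitor-query";

const client = new LogsQueryClient(new DefaultAzureCredential());

async function canvasAppErrorsLast24h(workspaceId: string) {
  const kql = `
    AppTraces
    | where AppRoleName == "MyCanvasApp"   // hypothetical app name
    | where SeverityLevel >= 3             // warnings/errors and worse
    | summarize count() by bin(TimeGenerated, 1h)`;

  const result = await client.queryWorkspace(workspaceId, kql, {
    duration: "P1D", // last 24 hours
  });

  if (result.status === LogsQueryResultStatus.Success) {
    for (const table of result.tables) {
      for (const row of table.rows) {
        console.log(row); // one bucket of error counts per hour
      }
    }
  } else {
    console.error("Query only partially succeeded:", result.partialError);
  }
}
```

A query like this can feed a scheduled pipeline step or a dashboard tile, so the same numbers you see in the platform analytics also drive alerts.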
Canvas apps have a built-in testing tool called Test Studio, which is fairly limited and yet still underused. Tests can be recorded, and on deployment the recorded tests can be run and the results reported in ADO or any other CI/CD pipeline.

You can also make web calls from CI/CD or a dedicated service to continually run advanced availability tests, and you should do this. It ensures you pick up when part or all of the process changes, and you can set up alerts and respond.
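As a rough illustration, here is what a minimal probe could look like in TypeScript on Node 18+ (where fetch and AbortSignal.timeout are built in). The endpoint URL and the "healthy" marker are hypothetical; swap in a health endpoint or page your process depends on.

```typescript
// Sketch: a scheduled availability probe run from a CI/CD job or a small
// dedicated service. A non-zero exit fails the step and can raise an alert.
const ENDPOINT = "https://example.com/api/health"; // placeholder

async function probe(): Promise<void> {
  const started = Date.now();
  const response = await fetch(ENDPOINT, { signal: AbortSignal.timeout(10_000) });
  const elapsedMs = Date.now() - started;

  if (!response.ok) {
    throw new Error(`Availability check failed: HTTP ${response.status} in ${elapsedMs} ms`);
  }

  const body = await response.text();
  if (!body.includes("healthy")) {
    throw new Error("Endpoint responded but did not report a healthy status");
  }

  console.log(`OK: HTTP ${response.status} in ${elapsedMs} ms`);
}

probe().catch((err) => {
  console.error(err);
  process.exit(1); // make the CI/CD step (and its alert) fire
});
```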

Tools:
You can also log to a SIEM (enterprise logging tool) and use it for monitoring. For instance, you can ship your App Insights/Log Analytics/Azure Monitor logs to Dynatrace so they become part of your overall monitoring and response strategy.
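To illustrate the ingest side of that hop, here is a hedged sketch against Dynatrace's Log Monitoring ingest endpoint (POST /api/v2/logs/ingest) with an API token. The tenant URL, token, and attribute names are assumptions; in practice Dynatrace's Azure log-forwarding integration normally handles this for you.

```typescript
// Sketch: forwarding a single log record to Dynatrace Log Monitoring.
const DT_ENV = "https://abc12345.live.dynatrace.com"; // placeholder tenant
const DT_TOKEN = process.env.DT_API_TOKEN ?? "";       // token with log ingest scope

async function forwardLogRecord(content: string, severity: string): Promise<void> {
  const res = await fetch(`${DT_ENV}/api/v2/logs/ingest`, {
    method: "POST",
    headers: {
      "Authorization": `Api-Token ${DT_TOKEN}`,
      "Content-Type": "application/json",
    },
    // One log event; attribute names beyond content/severity are illustrative.
    body: JSON.stringify([
      {
        content,
        severity,
        "log.source": "azure-app-insights-forwarder",
      },
    ]),
  });

  if (!res.ok) {
    throw new Error(`Dynatrace ingest failed: ${res.status} ${await res.text()}`);
  }
}
```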

You can continuously monitor APIs using CI/CD tools like ADO, but I prefer to use Postman's Enterprise Infrastructure service (it's awesome). Similarly, I am a full convert to Microsoft Playwright (I was a Selenium and Cypress fan; now I'm Playwright only).
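For flavour, a minimal Playwright check might look like the sketch below. The URL, page title, and button label are hypothetical, and a real published canvas app would also need an authenticated storageState and possibly a frameLocator for the app's iframe, which are glossed over here.

```typescript
// Sketch: a Playwright smoke test against a hypothetical app page.
import { test, expect } from "@playwright/test";

const APP_URL = "https://example.com/expense-app"; // placeholder

test("expense app loads and the main screen renders", async ({ page }) => {
  await page.goto(APP_URL);

  // Wait for the app shell rather than sleeping for a fixed time.
  await expect(page).toHaveTitle(/Expense/i);

  // Assert a key control is visible before calling the app "available".
  await expect(page.getByRole("button", { name: "New claim" })).toBeVisible();
});
```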

Grafana, Power BI, and Azure Dashboards all have different pros and cons for monitoring. There is also some great AI work coming out at the moment around UI testing. Specifically, I've seen Dynatrace, Playwright and BrowserStack use AI to compare screenshots for test validation.
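Playwright's built-in comparison today is pixel-based rather than AI-driven, but it is the baseline those AI-assisted checks build on. A small sketch with toHaveScreenshot is below; the URL is a placeholder, the first run records the golden image, and later runs diff against it.

```typescript
// Sketch: Playwright's built-in (pixel-level) screenshot comparison.
import { test, expect } from "@playwright/test";

test("dashboard has not visually regressed", async ({ page }) => {
  await page.goto("https://example.com/dashboard"); // placeholder

  // maxDiffPixelRatio tolerates minor anti-aliasing noise between runs.
  await expect(page).toHaveScreenshot("dashboard.png", { maxDiffPixelRatio: 0.01 });
});
```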

I'll be expanding on automation and continuous testing in this series of posts. These are my initial thoughts, and there are a lot of good results coming out.