Sunday 14 May 2023

Dynatrace Product Play

Dynatrace is broadly similar to Azure Monitor: a platform for collecting and analysing observability data.

  • Dynatrace (really good if you use multi-cloud). The SaaS offering is hosted on AWS; it can also run on-prem.
  • Making workloads observable means ingesting logs, traces, events, and metrics into Dynatrace. From this ingested data we can analyse and automate behaviour.
  • OneAgent is deployed on the compute, i.e. VMs or Kubernetes. Dynatrace can also import logs from other SIEMs or from Azure Monitor, so you can eventually get Azure service logs such as App Service or Service Bus.
  • Does full-stack monitoring, including code-level, application, and infrastructure monitoring; it can also show user monitoring.
  • Dynatrace offers scalable APIs that sit on Kubernetes.
  • "Davis" is the AI engine used to help work out the cause of problems.
  • Alerting is solid.  
  • Dynatrace can capture telemetry at three levels: 1) network/infra, 2) SDK, and 3) DEM (user monitoring, etc.). Logs, traces, and metrics are ingested using either OneAgent or OpenTelemetry.
  • Management Zones - a user only sees the information they have access to and need.
  • Define a Site Reliability Guardian (SRG) for each programme/project; this allows you to identify, through RAG boards, the current and recent state of the various pieces. There are Guardian templates to use as a starting point.
  • W3C Trace Context is used - it allows for end-to-end tracing. OpenTelemetry or Dynatrace keep the trace and provide it in headers (traceparent); see the example header after this list.
  • Create documentation and tutorials for Dynatrace. Dynatrace has a playground tenant to experiment in.
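
The traceparent header has the form version-traceid-parentid-flags; the sample value below is the example used in the W3C Trace Context spec:

traceparent: 00-4bf92f3577b34da6a3ce929d0e0e4736-00f067aa0ba902b7-01

Here 00 is the version, the 32-hex-digit value is the trace id shared by every hop of the request, the 16-hex-digit value is the parent span id, and 01 flags the request as sampled.
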
High-level Architecture hosted on AWS.

High-level architecture for capturing logs et al. and then using the data.

Product Screen Shots:

Azure & Dynatrace
  • Abnormality detection using AI should greatly improve observability and security.
  • End-to-end visibility is what makes it so amazing.
  • Enterprises often use Dynatrace as their central SIEM solution. Shipping logs from Azure into Dynatrace takes planning but works well; categorise the data and ensure the right information is pushed into Dynatrace.
  • Dynatrace is a leader in its space in both Gartner and Forrester.
  • Grail - a lakehouse: schema-less, it allows for easy, fast querying at massive scale (a sample query is sketched after this list). Bring all data together and query at hyperscale. Grail is in 15 regions on AWS, Azure, or GCP for customers to use. The UK looks to be AWS only.
  • Grail: record-level protection, data masking, and support for access controls (elevated privileges).
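
As a rough feel for querying Grail, a minimal Dynatrace Query Language (DQL) sketch - the filter value and alias are illustrative:

fetch logs
| filter loglevel == "ERROR"
| summarize errorCount = count(), by: { dt.entity.host }
| sort errorCount desc

This fetches log records, keeps only errors, counts them per host, and sorts the busiest hosts first.
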
Dynatrace architecture for Grail from Barcelona conference 5 Oct 2023.

Collect all events in Grail and automate the process of identifying suspicious security activity, for a faster reaction time.

Azure offers Dynatrace as a SaaS service
Updated 16 Feb 2024



Wednesday 26 April 2023

Uploading files to a Dataverse table using the "Add picture" control in a Canvas App

Overview: I need to be able to upload pictures, or any file for that matter, and persist the file in a Dataverse entity.  This took me a little longer than it should have.

Notes: I added an "Add picture" control to the screen; I also added a label and an icon to clear the uploaded image.

I also added a save button; here I persist to a Dataverse table named "Evidence". The Evidence table has a column called "File" of type "File". The part that took me a while in the Patch statement was getting the file into the correct format (as it is a record, not a picture) - a sketch is below.  This works for any file upload but has the drawback of needing to change the lookup to "All file types".
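
A minimal Power Fx sketch of the Patch call, assuming the "Add picture" control's media button is named AddMediaButton1 and the other names match my table; the record shape (FileName plus Value) is the part that tripped me up, though the exact shape may vary:

Patch(
    Evidence,
    Defaults(Evidence),
    {
        File: {
            FileName: AddMediaButton1.FileName,  // name of the uploaded file
            Value: AddMediaButton1.Media         // the uploaded file's contents
        }
    }
)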

Saturday 22 April 2023

Microsoft's Well-Architected

Overview: The goal is to make Azure and your IT function operate as optimally as possible through performance, scalability, minimised costs, reliability, optimised DevOps, failure testing, geo/data sovereignty, app security, and auth security.  It is to be done consciously: 1) Collect/Gather (Well-Architected Review) > 2) Analyse > 3) Advise (build plan) > 4) Implement

Five Pillars:

  1. Security;
  2. Performance Efficiency;
  3. Reliability;
  4. Cost Optimisation;
  5. Operational Excellence;

Tons of tools are part of the Well-Architected Framework:

  • Azure Advisor - analyses the workload and gives possible recommendations/improvements, which can be imported into the Azure Well-Architected Review tool.
  • Azure Well-Architected Review tool (amazing) - answer questions and input the Azure Advisor recommendations. Pick an area and one or more of the five pillars, then work through the Azure Well-Architected Review tool to get a milestone, and then look to implement iteratively.

  • Well-Architected checklist
  • Provides templates to complete actions, e.g. RPO/RTO, security threat analysis, and threat modelling (STRIDE is Microsoft's model, comparable to DREAD)

1. Security Pillar (protect against, detect, and respond to threats)

Tools: monitoring from Azure Security Centre (ASC) NB!  Azure Defender and Microsoft 365 Defender feed into Azure Sentinel (the SIEM), and you can also stream on-prem data into the SIEM.

2. Performance Efficiency Pillar
Trade off cost against reliability, scale, and performance.  Chaos testing - test breaking/removing resources to mimic problems.  Monitor resources for resilience and performance.  Decide whether you want to dynamically scale, reacting to performance or increased load to grow/shrink the services.  Cache (e.g. Redis), in as many layers as possible.  Use multiple regions/zones to keep services close to users (paired regions are a good idea); zone and region resilience can affect performance.  A health model in effect means ensuring you have monitors and alerts to verify your system's health - think Azure Dashboards or Grafana; a CLI sketch of one such alert is below.
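
A minimal Azure CLI sketch of one health-model alert; the resource group, VM name, and threshold are all illustrative:

# Alert when average CPU on a VM exceeds 80% (resource names are hypothetical)
az monitor metrics alert create `
  --name vm-cpu-high `
  --resource-group rg-demo `
  --scopes $(az vm show --resource-group rg-demo --name vm-demo --query id --output tsv) `
  --condition "avg Percentage CPU > 80" `
  --description "Health model: average CPU above 80 percent"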

3. Reliability Pillar
High Availability (HA) & Resilience of Azure Resources

4. Cost Optimisation Pillar
  • Understand cost (choose the right service, e.g. Cosmos DB can be cheaper than SQL or vice versa)
  • Optimise (remove orphaned resources; reservations vs PAYG; licence optimisation; scale consumption when needed/optimise instances; be pragmatic in benefit-to-cost trade-offs).  Do cost modelling to understand what the cost is likely to be going forward.  Good RPO/RTO and multi-geo is expensive, but pay for it if you need it.  Design choices affect the cost.  Optimise data transfers and auto-scaling (vertical and horizontal, both expansion and reduction of resources).  Use the Azure Cost Management tool.  Automating provisioning helps with cost as the correct resource provisioning is implemented - a small Bicep sketch follows this list.  (Note: Bicep compiles to ARM templates; neither holds deployment state the way Terraform does.)
  • Control costs going forward (review periodically/constantly, use alerts to monitor usage).  Monitor your resource usage - can it be reduced?
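
A minimal Bicep sketch of automated provisioning; the account name is hypothetical and the SKU choice is itself a cost decision:

param location string = resourceGroup().location

// Standard_LRS is the cheapest redundancy tier - a deliberate cost trade-off
resource storage 'Microsoft.Storage/storageAccounts@2023-01-01' = {
  name: 'stcostdemo001' // hypothetical; must be globally unique
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
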
5. Operational Excellence Pillar

Dr. Kai Dupé presented the Well-Architected Framework on behalf of Microsoft on 20 & 21 April 2023, where I took the notes that built this post.

Monday 10 April 2023

Postman automation reminders

Also see "Postman to check Open API's are Running"

Fire Postman collections on demand using curl

A monitor is already set up; I need the Postman monitor id and an API key.  The call below shows the shape.
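
A sketch of the curl call, assuming an existing monitor; MONITOR_ID and the API key are placeholders for your own values:

curl -X POST "https://api.getpostman.com/monitors/MONITOR_ID/run" -H "x-api-key: YOUR_POSTMAN_API_KEY"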

Run a local Postman collection using Newman via PowerShell (call it from CI pipelines or a shortcut on the desktop); a minimal sketch is below.
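
A minimal PowerShell sketch, assuming Newman is installed globally via npm (npm install -g newman) and the collection/environment file names are placeholders:

# Run an exported collection with an environment file
newman run .\MyCollection.postman_collection.json `
  -e .\MyEnvironment.postman_environment.json `
  --reporters cli,junit --reporter-junit-export .\newman-results.xml

# Fail the pipeline (or surface an error) if any test failed
if ($LASTEXITCODE -ne 0) { Write-Error "Postman collection run failed" }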

Thursday 6 April 2023

Runas on Flows

Overview: If I use a connection in a Canvas App, the signed-in user uses their own permissions, and the connector acts as the signed-in user.

Problem: I wish to run a flow as a specific user and not as the user calling the flow from the Canvas App.

Hypothesis: I wish to call a logging connector into Log Analytics, so I have created a flow.  If I use the Power Apps V2 trigger, it offers an option to run in another user's context.

Resolution: Open the Workflow, ensure you are using the Power Apps V2 trigger, then...


Here I use Scopes to perform a Try/Catch/Finally set of logic.


Tip: most people tend to use a custom connector to push the error message into a function from the workflow; the Function App uses the App Insights SDK and logs the workflow error.

Simple C# code to write to App Insights using the SDK. 

#r "Newtonsoft.Json"
using System.Net;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Primitives;
using Newtonsoft.Json;
public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
{
string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
dynamic data = JsonConvert.DeserializeObject(requestBody);
int eventId = data?.eventId;
string errType = data?.errType;
string errMsg = data?.errMsg;
string correlationId = data?.correlationId;
string workflowId = data?.workflowId;
string workflowUrl = data?.workflowUrl;
string flowDisplayName = data?.flowDisplayName;
var custProps = new Dictionary<string, object>()
{
{ "CorrelationId", correlationId},
{ "WorkflowId", workflowId},
{ "WorkflowUrl", workflowUrl},
{ "WorkflowDisplayName", flowDisplayName}
};
using (log.BeginScope(custProps))
{
if (errType=="Debug") { log.Log(LogLevel.Debug, eventId, $"{errMsg}"); }
else if (errType=="Trace") { log.Log(LogLevel.Trace, eventId, $"{errMsg}"); }
else { log.LogInformation($"Event is {eventId}, type is {errType}, and msg is {errMsg}");}
};
string responseMessage = $"This HTTP triggered function executed successfully. {errType} - {errMsg}";
return new OkObjectResult(responseMessage);
}
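
For testing the function, a sample request body matching the fields the function reads (all values illustrative):

{
  "eventId": 1001,
  "errType": "Debug",
  "errMsg": "Patch to Evidence failed",
  "correlationId": "test-correlation-id",
  "workflowId": "test-workflow-id",
  "workflowUrl": "https://make.powerautomate.com/flows/example",
  "flowDisplayName": "Log to App Insights demo"
}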

More Info:

Reza Dorrani has a great recording showing how to run Power Automate flows using elevated/shared accounts.