Alerting

Alerting configuration for Netasampark monitoring.

Alert Channels

Email Alerts

  • Recipients: Admin team
  • Frequency: Immediate for critical, daily digest for warnings
  • Format: HTML with details

SMS Alerts

  • Recipients: On-call engineers
  • Frequency: Critical alerts only
  • Provider: Twilio

Slack Alerts

  • Channel: #alerts
  • Frequency: All alerts
  • Format: Rich messages with context

PagerDuty

  • Integration: For critical alerts
  • Escalation: Automatic escalation
  • On-call: Rotating schedule
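
The four channels above can be wired together through Prometheus Alertmanager routing. A minimal sketch (receiver names are assumptions, webhook URLs and keys are placeholders, and rules are assumed to attach a `severity` label):

```yaml
# alertmanager.yml (sketch; credentials are placeholders, not real values)
route:
  receiver: slack-alerts            # default: all alerts go to Slack
  routes:
    - match:
        severity: critical          # assumes rules set a severity label
      receiver: critical-escalation

receivers:
  - name: slack-alerts
    slack_configs:
      - channel: "#alerts"
  - name: critical-escalation      # critical alerts page, email, and post to Slack
    pagerduty_configs:
      - routing_key: "<pagerduty-routing-key>"
    email_configs:
      - to: "admin@netasampark.com"
    slack_configs:
      - channel: "#alerts"
```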

Alert Rules

Critical Alerts

```yaml
- alert: ServerDown
  expr: up == 0
  for: 1m
  labels:
    severity: critical
  annotations:
    summary: "Server is down"
    description: "Server {{ $labels.instance }} is down"
```

```yaml
- alert: HighErrorRate
  expr: rate(http_requests_total{status=~"5.."}[5m]) > 0.05
  for: 5m
  labels:
    severity: critical
  annotations:
    summary: "High error rate detected"
    description: "Error rate is {{ $value }} requests/sec"
```
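
To make the `HighErrorRate` threshold concrete: `rate()` is the per-second increase of a counter over the lookback window, so the rule fires when 5xx responses exceed 0.05 per second sustained for 5 minutes. A rough Python sketch of the same arithmetic (the sample counter values are made up for illustration):

```python
def per_second_rate(start_count: float, end_count: float, window_seconds: float) -> float:
    """Approximate what Prometheus rate() computes: per-second
    increase of a monotonic counter over a time window."""
    return (end_count - start_count) / window_seconds

# Made-up samples: the 5xx counter rose from 1200 to 1218 over a 5-minute window.
rate_5xx = per_second_rate(1200, 1218, 5 * 60)
print(rate_5xx)           # 0.06 errors/sec
print(rate_5xx > 0.05)    # True -> HighErrorRate would fire after 5 minutes
```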

Warning Alerts

```yaml
- alert: HighResponseTime
  expr: histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m])) > 2
  for: 10m
  labels:
    severity: warning
  annotations:
    summary: "High response time"
    description: "95th percentile response time is {{ $value }}s"
```
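
In a Prometheus rules file, entries like the ones above sit under a named group that is loaded via `rule_files:` in `prometheus.yml`. A minimal wrapper (the file and group names here are assumptions, not values from this deployment):

```yaml
# alert_rules.yml -- referenced from rule_files: in prometheus.yml
groups:
  - name: netasampark-alerts
    rules:
      - alert: ServerDown
        expr: up == 0
        for: 1m
        labels:
          severity: critical
        annotations:
          summary: "Server is down"
          description: "Server {{ $labels.instance }} is down"
```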

Alert Configuration

Laravel Alerts

```php
// app/Providers/AppServiceProvider.php
// Note: Alert here is an application-level helper, not a built-in Laravel facade.
if (app()->environment('production')) {
    if ($errorRate > 0.05) {
        Alert::critical('High error rate detected')
            ->toSlack('#alerts')
            ->toEmail('admin@netasampark.com');
    }
}
```

Notification Templates

Email Template

```html
<h2>Alert: {{ $alert->title }}</h2>
<p><strong>Severity:</strong> {{ $alert->severity }}</p>
<p><strong>Time:</strong> {{ $alert->timestamp }}</p>
<p><strong>Description:</strong> {{ $alert->description }}</p>
<p><strong>Details:</strong></p>
<pre>{{ json_encode($alert->context, JSON_PRETTY_PRINT) }}</pre>
```

Best Practices

  1. Alert Fatigue: Alert only on actionable, user-impacting conditions to avoid noise that trains responders to ignore pages
  2. Clear Messages: Write descriptive summaries that identify the affected service and the observed symptom
  3. Actionable: Include remediation steps or a runbook link in every alert
  4. Escalation: Define escalation policies so unacknowledged critical alerts reach a human
  5. Testing: Test alert rules and delivery channels regularly, not only after incidents
  6. Documentation: Document alert procedures and keep runbooks up to date