
Runtime Logging and Monitoring with gcloud

Overview

This guide provides step-by-step instructions for monitoring and analyzing runtime logs of Firebase App Hosting services with the gcloud CLI: accessing application logs, monitoring performance, and understanding system behavior in Cloud Run.


Quick Start

1. Identify Your Service

First, find the Cloud Run service name for your project:

# List all Cloud Run services
gcloud run services list --project=PROJECT_ID

Common Service Names (by project ID):

  • toto-bo-stg: toto-bo-stg-backend
  • toto-bo: toto-bo-backend
  • toto-f9d2f-stg: toto-f9d2f-stg-backend
  • toto-f9d2f: toto-f9d2f-backend

2. Get Recent Application Logs

# Get recent application logs for a specific service
gcloud logging read \
"resource.type=cloud_run_revision \
AND resource.labels.service_name=\"SERVICE_NAME\" \
AND resource.labels.location=\"us-central1\" \
AND severity>=ERROR" \
--project=PROJECT_ID \
--limit=20 \
--format=json

Example for toto-bo-stg:

gcloud logging read \
"resource.type=cloud_run_revision \
AND resource.labels.service_name=\"toto-bo-stg-backend\" \
AND resource.labels.location=\"us-central1\"" \
--project=toto-bo-stg \
--limit=20 \
--format=json
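The JSON output is verbose; a small helper can condense each entry to one line. This is an illustrative sketch that assumes the entries were first saved with --format=json > logs.json (a sample entry is embedded here so the script stands alone):

```shell
# Sample of what `gcloud logging read ... --format=json > logs.json` produces.
cat > logs.json <<'EOF'
[
  {"timestamp": "2024-05-01T12:00:00Z", "severity": "ERROR",
   "textPayload": "Failed to parse private key"}
]
EOF

# Print one "timestamp severity message" line per entry.
python3 - <<'EOF'
import json

with open("logs.json") as f:
    entries = json.load(f)

for e in entries:
    # Messages can arrive as plain text or as structured JSON payloads.
    msg = e.get("textPayload") or e.get("jsonPayload", {}).get("message", "")
    print(e.get("timestamp", ""), e.get("severity", "DEFAULT"), msg)
EOF
```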

Step-by-Step Monitoring

Step 1: Check HTTP Request Logs

To monitor HTTP requests and responses, check the request logs. The filter below keeps only server errors; drop the httpRequest.status clause to see all requests:

gcloud logging read \
"resource.type=cloud_run_revision \
AND resource.labels.service_name=\"SERVICE_NAME\" \
AND resource.labels.location=\"us-central1\" \
AND httpRequest.status>=500" \
--project=PROJECT_ID \
--limit=10 \
--format=json

This shows you:

  • Which endpoints are being accessed
  • Response status codes
  • Request URLs
  • Timestamps

Step 2: Get Application Logs

To see detailed application output, get application logs from stdout/stderr:

# Get application logs (stdout/stderr)
gcloud logging read \
"resource.type=cloud_run_revision \
AND resource.labels.service_name=\"SERVICE_NAME\" \
AND resource.labels.location=\"us-central1\" \
AND (logName:\"stdout\" OR logName:\"stderr\")" \
--project=PROJECT_ID \
--limit=50 \
--format=json

PowerShell Note: In PowerShell, use backticks for line continuation:

gcloud logging read `
"resource.type=cloud_run_revision `
AND resource.labels.service_name=`"toto-bo-stg-backend`" `
AND resource.labels.location=`"us-central1`" `
AND (logName:`"stdout`" OR logName:`"stderr`")" `
--project=toto-bo-stg `
--limit=50 `
--format=json

Step 3: Filter for Specific Log Patterns

Search for specific log messages:

# Search for specific log text
gcloud logging read \
"resource.type=cloud_run_revision \
AND resource.labels.service_name=\"SERVICE_NAME\" \
AND resource.labels.location=\"us-central1\" \
AND textPayload:\"Failed to parse private key\"" \
--project=PROJECT_ID \
--limit=10 \
--format=json
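Note that the whole filter is a single shell argument: the outer double quotes delimit it, and the quotes around values inside it are backslash-escaped. A quick sketch of building the same kind of filter in a variable first, which is often easier to read:

```shell
SERVICE_NAME="toto-bo-stg-backend"
SEARCH_TEXT="Failed to parse private key"

# Inner quotes belong to the filter itself, so they are escaped.
FILTER="resource.type=cloud_run_revision AND resource.labels.service_name=\"$SERVICE_NAME\" AND textPayload:\"$SEARCH_TEXT\""
echo "$FILTER"
# Then: gcloud logging read "$FILTER" --project=PROJECT_ID --limit=10
```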

Step 4: Use Trace IDs for Detailed Investigation

When you find a specific request in HTTP logs, use the trace ID to get all related logs:

# Get all logs for a specific trace
gcloud logging read \
"trace=\"projects/PROJECT_ID/traces/TRACE_ID\"" \
--project=PROJECT_ID \
--limit=100 \
--format=json

Example:

gcloud logging read \
"trace=\"projects/toto-bo-stg/traces/77a50ee876336001e4ce52cecdc4e8ab\"" \
--project=toto-bo-stg \
--limit=100 \
--format=json
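The trace value has the form projects/PROJECT_ID/traces/TRACE_ID, so the bare ID is just the last path segment. A hypothetical sketch extracting it from a saved HTTP-error entry (sample data inlined for illustration):

```shell
# Pull the bare trace ID out of a log entry's "trace" field.
TRACE_ID=$(python3 - <<'EOF'
import json

entry = json.loads('''{
  "trace": "projects/toto-bo-stg/traces/77a50ee876336001e4ce52cecdc4e8ab",
  "httpRequest": {"status": 500}
}''')
# The trace ID is the last path segment.
print(entry["trace"].rsplit("/", 1)[-1])
EOF
)
echo "$TRACE_ID"
# Then: gcloud logging read "trace=\"projects/toto-bo-stg/traces/$TRACE_ID\"" --project=toto-bo-stg
```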

Common Log Patterns

1. Secret/Environment Variable Configuration

Error Pattern:

  • Failed to parse private key
  • Invalid PEM formatted message
  • Permission denied
  • Secret not found

Troubleshooting:

# Check for secret-related errors
gcloud logging read \
"resource.type=cloud_run_revision \
AND resource.labels.service_name=\"SERVICE_NAME\" \
AND (textPayload:\"secret\" OR textPayload:\"Secret\" OR textPayload:\"private key\" OR textPayload:\"PEM\")" \
--project=PROJECT_ID \
--limit=20 \
--format=json

Solution: Verify secret format and access permissions. For PEM keys, ensure they have proper headers and newlines.

2. Database Connection Errors

Error Pattern:

  • Firestore permission denied
  • Failed to initialize Firebase Admin
  • Database connection error

Troubleshooting:

# Check for Firebase/database errors
gcloud logging read \
"resource.type=cloud_run_revision \
AND resource.labels.service_name=\"SERVICE_NAME\" \
AND (textPayload:\"Firebase\" OR textPayload:\"Firestore\" OR textPayload:\"database\")" \
--project=PROJECT_ID \
--limit=20 \
--format=json

3. Application Crashes

Error Pattern:

  • Unhandled exception
  • Cannot find module
  • TypeError
  • ReferenceError

Troubleshooting:

# Get full error stack traces
gcloud logging read \
"resource.type=cloud_run_revision \
AND resource.labels.service_name=\"SERVICE_NAME\" \
AND logName:\"stderr\"" \
--project=PROJECT_ID \
--limit=50 \
--format=json

4. Memory/Performance Issues

Error Pattern:

  • Out of memory
  • Request timeout
  • Function execution time exceeded

Troubleshooting:

# Check for performance-related errors
gcloud logging read \
"resource.type=cloud_run_revision \
AND resource.labels.service_name=\"SERVICE_NAME\" \
AND (textPayload:\"timeout\" OR textPayload:\"memory\" OR textPayload:\"exceeded\")" \
--project=PROJECT_ID \
--limit=20 \
--format=json

Advanced Filtering Techniques

Time-Based Filtering

# Get logs from last hour
gcloud logging read \
"resource.type=cloud_run_revision \
AND resource.labels.service_name=\"SERVICE_NAME\" \
AND timestamp>=\"$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)\"" \
--project=PROJECT_ID \
--limit=50

PowerShell:

$oneHourAgo = (Get-Date).AddHours(-1).ToUniversalTime().ToString("yyyy-MM-ddTHH:mm:ssZ")
gcloud logging read `
"resource.type=cloud_run_revision `
AND resource.labels.service_name=`"toto-bo-stg-backend`" `
AND timestamp>=`"$oneHourAgo`"" `
--project=toto-bo-stg `
--limit=50
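The date -u -d '1 hour ago' form above is GNU date (Linux); macOS/BSD date uses -v-1H instead. A minimal sketch that computes the timestamp once and reuses it:

```shell
# RFC3339 UTC timestamp for one hour ago (GNU date).
# On macOS/BSD, use: date -u -v-1H +%Y-%m-%dT%H:%M:%SZ
SINCE=$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)
echo "$SINCE"
# Then: gcloud logging read "... AND timestamp>=\"$SINCE\"" --project=PROJECT_ID
```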

Severity-Based Filtering

# Get only ERROR and CRITICAL logs
gcloud logging read \
"resource.type=cloud_run_revision \
AND resource.labels.service_name=\"SERVICE_NAME\" \
AND severity>=ERROR" \
--project=PROJECT_ID \
--limit=50

Combining Multiple Filters

# Get errors from specific time range with specific text
gcloud logging read \
"resource.type=cloud_run_revision \
AND resource.labels.service_name=\"SERVICE_NAME\" \
AND resource.labels.location=\"us-central1\" \
AND severity>=ERROR \
AND textPayload:\"Firebase\"" \
--project=PROJECT_ID \
--limit=30 \
--format=json

Output Formats

JSON Format

Best for programmatic processing and detailed analysis:

gcloud logging read "..." --format=json --project=PROJECT_ID

Table Format

Good for quick visual inspection:

gcloud logging read "..." --format="table(timestamp,severity,textPayload)" --project=PROJECT_ID

Value Format

Extract specific fields:

gcloud logging read "..." --format="value(textPayload,jsonPayload.message)" --project=PROJECT_ID

Practical Examples

Example 1: Investigating HTTP 500 Errors

# 1. Find failing requests
gcloud logging read \
"resource.type=cloud_run_revision \
AND resource.labels.service_name=\"toto-bo-stg-backend\" \
AND httpRequest.status=500" \
--project=toto-bo-stg \
--limit=5 \
--format=json

# 2. Get the trace ID from the output, then:
gcloud logging read \
"trace=\"projects/toto-bo-stg/traces/TRACE_ID_FROM_STEP_1\"" \
--project=toto-bo-stg \
--limit=100 \
--format=json

Example 2: Finding Secret Format Errors

# Search for PEM/private key errors
gcloud logging read \
"resource.type=cloud_run_revision \
AND resource.labels.service_name=\"toto-bo-stg-backend\" \
AND resource.labels.location=\"us-central1\" \
AND (textPayload:\"PEM\" OR textPayload:\"private key\" OR textPayload:\"Invalid\")" \
--project=toto-bo-stg \
--limit=20 \
--format=json

Example 3: Monitoring Recent Errors

# Get all errors from the last 30 minutes
gcloud logging read \
"resource.type=cloud_run_revision \
AND resource.labels.service_name=\"toto-bo-stg-backend\" \
AND severity>=ERROR" \
--project=toto-bo-stg \
--limit=50 \
--format=json \
--freshness=30m

Common Issues and Solutions

Issue: "Invalid PEM formatted message"

Cause: Private key in secret is not properly formatted.

Solution:

  1. Ensure the key has PEM headers: -----BEGIN PRIVATE KEY----- and -----END PRIVATE KEY-----
  2. Convert \n escape sequences to actual newlines
  3. Verify the key doesn't have extra characters at the beginning/end

Fix Command (PowerShell):

# Format and update the secret
$keyContent = "YOUR_KEY_CONTENT"
$pemKey = "-----BEGIN PRIVATE KEY-----`n" + ($keyContent -replace '\\n', "`n") + "`n-----END PRIVATE KEY-----"
$tempFile = [System.IO.Path]::GetTempFileName()
$pemKey | Out-File -FilePath $tempFile -Encoding utf8 -NoNewline
gcloud secrets versions add SECRET_NAME --data-file=$tempFile --project=PROJECT_ID
Remove-Item $tempFile
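For bash users, a hypothetical equivalent of the PowerShell fix above (the key content and secret name are placeholders; the \n-to-newline sed replacement assumes GNU sed):

```shell
# Convert literal \n escape sequences to real newlines and wrap in PEM headers.
KEY_CONTENT='FIRST_LINE\nSECOND_LINE'   # placeholder; paste your actual key body here
PEM_KEY="-----BEGIN PRIVATE KEY-----
$(printf '%s' "$KEY_CONTENT" | sed 's/\\n/\n/g')
-----END PRIVATE KEY-----"

printf '%s\n' "$PEM_KEY" > /tmp/key.pem
# gcloud secrets versions add SECRET_NAME --data-file=/tmp/key.pem --project=PROJECT_ID
# rm /tmp/key.pem
```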

Issue: Can't see error messages in logs

Cause: Errors might be in stderr, not stdout.

Solution:

# Check both stdout and stderr
gcloud logging read \
"resource.type=cloud_run_revision \
AND resource.labels.service_name=\"SERVICE_NAME\" \
AND (logName:\"stdout\" OR logName:\"stderr\")" \
--project=PROJECT_ID \
--limit=50

Issue: Too many logs to parse

Solution: Use more specific filters, keeping each AND clause inside the quoted filter string:

# Filter by severity first
gcloud logging read "... AND severity>=ERROR" --project=PROJECT_ID

# Then narrow by time
gcloud logging read "... AND timestamp>=\"...\"" --project=PROJECT_ID

# Then narrow by text
gcloud logging read "... AND textPayload:\"specific error\"" --project=PROJECT_ID

Best Practices

  1. Start with HTTP errors - Check httpRequest.status>=500 to identify failing endpoints
  2. Use trace IDs - When you find an error, use its trace ID to get all related logs
  3. Check both stdout and stderr - Application errors can appear in either stream
  4. Filter by severity - Use severity>=ERROR to focus on problems
  5. Use JSON format - Easier to parse and extract specific fields
  6. Time-based filtering - Narrow down to relevant timeframes
  7. Document findings - Keep notes on common errors and their solutions

Quick Reference

Project IDs

  • toto-bo-stg: Staging backoffice
  • toto-bo: Production backoffice
  • toto-f9d2f-stg: Staging app
  • toto-f9d2f: Production app

Common Service Names

  • toto-bo-stg-backend
  • toto-bo-backend
  • toto-f9d2f-stg-backend
  • toto-f9d2f-backend

Region

  • All services use us-central1

Essential Commands

# List services
gcloud run services list --project=PROJECT_ID

# Get recent errors
gcloud logging read \
"resource.type=cloud_run_revision \
AND resource.labels.service_name=\"SERVICE_NAME\" \
AND severity>=ERROR" \
--project=PROJECT_ID \
--limit=20

# Get application logs
gcloud logging read \
"resource.type=cloud_run_revision \
AND resource.labels.service_name=\"SERVICE_NAME\" \
AND (logName:\"stdout\" OR logName:\"stderr\")" \
--project=PROJECT_ID \
--limit=50


💡 Pro Tip: Keep a terminal window open with a filtered log stream for real-time monitoring:

# Stream errors in real-time
gcloud logging tail \
"resource.type=cloud_run_revision \
AND resource.labels.service_name=\"SERVICE_NAME\" \
AND severity>=ERROR" \
--project=PROJECT_ID