Google Cloud Platform has become a major player in cloud computing, particularly for organizations leveraging data analytics, machine learning, and Kubernetes. GCP interviews test your understanding of core services and how they fit together—not just whether you've clicked through the console.
This guide covers the essential GCP services and concepts that come up in DevOps, SRE, and cloud engineering interviews.
GCP Fundamentals
Before diving into services, understand GCP's organizational structure.
Resource hierarchy:
Organization
└── Folders (optional)
└── Projects
└── Resources (VMs, buckets, etc.)
Key concepts:
- Projects are the fundamental organizing unit—billing, APIs, and IAM are managed per project
- Folders group projects for organizational policies
- Labels are key-value pairs for resource organization and cost allocation
- Regions and zones determine where resources are deployed
Example question: "How does GCP's resource hierarchy differ from AWS?"
In AWS, accounts are the primary boundary. GCP uses projects within an organization, with folders providing additional grouping. This makes GCP's multi-project architectures more straightforward for large organizations—a single organization can have thousands of projects with centralized IAM and billing.
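A minimal sketch of working with the hierarchy from the CLI (the org and folder IDs here are placeholders):
# Create a folder under the organization, then a project inside it
gcloud resource-manager folders create \
  --display-name="Engineering" \
  --organization=123456789
gcloud projects create my-new-project --folder=987654321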
Compute Services
Compute Engine (IaaS)
Virtual machines in GCP. Comparable to AWS EC2.
Key concepts:
- Machine types: Predefined (e2, n2, c2) or custom configurations
- Preemptible VMs: Up to 80% cheaper, can be terminated anytime (max 24 hours)
- Spot VMs: Similar to preemptible but without 24-hour limit
- Sole-tenant nodes: Dedicated physical servers for compliance requirements
- Live migration: VMs migrate during maintenance without downtime
Example question: "When would you use preemptible VMs?"
# Create a preemptible instance
gcloud compute instances create batch-worker \
--machine-type=n2-standard-4 \
--preemptible \
--no-restart-on-failure \
--maintenance-policy=terminate
# Use cases:
# - Batch processing jobs
# - CI/CD build agents
# - Fault-tolerant distributed workloads
# - Development/testing environments
Preemptible VMs suit workloads that can handle interruption—batch jobs, rendering, data processing. Don't use them for databases or user-facing services.
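Spot VMs use a newer flag set; a minimal sketch, with the instance name as a placeholder:
# Create a Spot instance (no 24-hour limit; terminated when capacity is reclaimed)
gcloud compute instances create batch-worker-spot \
  --machine-type=n2-standard-4 \
  --provisioning-model=SPOT \
  --instance-termination-action=STOP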
Cloud Run (Serverless Containers)
Fully managed container platform. Run any containerized application without managing infrastructure.
Key concepts:
- Container-based: Any language, any runtime
- Scale to zero: Pay only when handling requests
- Concurrency: Single instance handles multiple requests (configurable)
- Cold starts: First request may have latency as container spins up
- Cloud Run Jobs: For batch workloads (no incoming requests)
Example question: "Design a deployment for a REST API that has unpredictable traffic."
# Service configuration
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: api-service
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"    # Avoid cold starts
        autoscaling.knative.dev/maxScale: "100"
    spec:
      containerConcurrency: 80  # Requests per instance
      containers:
        - image: gcr.io/my-project/api:latest
          resources:
            limits:
              cpu: "2"
              memory: "1Gi"
Cloud Run is ideal here—it scales automatically, you pay per request, and the concurrency model handles traffic spikes efficiently.
Cloud Functions (FaaS)
Event-driven serverless functions. Similar to AWS Lambda.
Key concepts:
- Triggers: HTTP, Pub/Sub, Cloud Storage, Firestore, etc.
- Runtimes: Node.js, Python, Go, Java, .NET, Ruby
- Gen 1 vs Gen 2: Gen 2 built on Cloud Run, supports longer timeouts and concurrency
- Cold starts: More significant than Cloud Run for infrequent invocations
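Deployment is a single gcloud command. A sketch of a Gen 1 deploy matching the (event, context) signature used in the example below; the bucket name is a placeholder:
# Deploy a Gen 1 function triggered by uploads to a bucket
gcloud functions deploy process-image \
  --no-gen2 \
  --runtime=nodejs20 \
  --region=us-central1 \
  --trigger-bucket=my-uploads \
  --entry-point=processImage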
Example question: "Process uploaded images automatically."
// Cloud Function triggered by Cloud Storage
const sharp = require('sharp');
const { Storage } = require('@google-cloud/storage');

exports.processImage = async (event, context) => {
  // Skip if already processed (avoids re-triggering on our own output)
  if (event.name.startsWith('processed/')) return;

  const storage = new Storage();
  const bucket = storage.bucket(event.bucket);
  const file = bucket.file(event.name);

  const [buffer] = await file.download();
  const processed = await sharp(buffer)
    .resize(800, 600)
    .jpeg({ quality: 80 })
    .toBuffer();

  await bucket.file(`processed/${event.name}`).save(processed);
  console.log(`Processed: ${event.name}`);
};
Containers & Kubernetes
Google Kubernetes Engine (GKE)
Managed Kubernetes. GCP's flagship container orchestration service.
Key concepts:
- Standard mode: You manage nodes, full control
- Autopilot mode: Google manages everything, pay per pod
- Node pools: Groups of nodes with same configuration
- Workload Identity: Secure pod-to-GCP authentication
- GKE Enterprise: Multi-cluster management, service mesh
Example question: "Compare GKE Autopilot vs Standard mode."
| Aspect | Standard | Autopilot |
|---|---|---|
| Node management | You manage | Google manages |
| Pricing | Per node | Per pod resources |
| Customization | Full control | Limited |
| Security | You configure | Hardened by default |
| Best for | Complex workloads | Simplified operations |
# Create Autopilot cluster
gcloud container clusters create-auto my-cluster \
--region=us-central1
# Create Standard cluster
gcloud container clusters create my-cluster \
--region=us-central1 \
--num-nodes=3 \
  --machine-type=e2-standard-4
Workload Identity setup:
# Kubernetes ServiceAccount
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-app
  annotations:
    iam.gke.io/gcp-service-account: my-app@my-project.iam.gserviceaccount.com
---
# Pod using the ServiceAccount
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  serviceAccountName: my-app
  containers:
    - name: app
      image: gcr.io/my-project/my-app
Then bind the Kubernetes ServiceAccount to the GCP service account:
gcloud iam service-accounts add-iam-policy-binding \
my-app@my-project.iam.gserviceaccount.com \
--role=roles/iam.workloadIdentityUser \
--member="serviceAccount:my-project.svc.id.goog[default/my-app]"Artifact Registry
Container and package repository; the successor to the now-deprecated Container Registry.
Key concepts:
- Supports Docker, Maven, npm, Python, Go, Apt, Yum
- Regional or multi-regional
- Vulnerability scanning integrated
- IAM-based access control
# Create Docker repository
gcloud artifacts repositories create my-repo \
--repository-format=docker \
--location=us-central1
# Configure Docker authentication
gcloud auth configure-docker us-central1-docker.pkg.dev
# Push image
docker push us-central1-docker.pkg.dev/my-project/my-repo/my-image:tag
Identity & Access Management
IAM Fundamentals
GCP IAM is based on the principle: Who (identity) can do What (role) on Which resource.
Key concepts:
- Principals: Users, service accounts, groups, domains
- Roles: Collections of permissions (predefined, custom, basic)
- Policies: Bind principals to roles on resources
- Service accounts: Identities for applications and services
Example question: "What's the difference between primitive, predefined, and custom roles?"
Primitive (Basic) roles - legacy, broad permissions:
- roles/viewer: Read-only access
- roles/editor: Read + write access
- roles/owner: Full access + IAM management
Predefined roles - granular, service-specific:
- roles/storage.objectViewer: Read GCS objects
- roles/compute.instanceAdmin: Manage VMs
- roles/bigquery.dataEditor: Edit BigQuery datasets
Custom roles - your own permission sets:
- Created from specific permissions
- Useful when predefined roles are too broad
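A sketch of creating a custom role; the role ID, title, and permission set here are illustrative:
# Custom role with only the permissions an app needs
gcloud iam roles create bucketReader \
  --project=my-project \
  --title="Bucket Reader" \
  --permissions=storage.objects.get,storage.objects.list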
IAM policy example:
# Grant Storage Object Viewer to a service account
gcloud projects add-iam-policy-binding my-project \
--member="serviceAccount:my-app@my-project.iam.gserviceaccount.com" \
--role="roles/storage.objectViewer"
# Grant at bucket level (more restrictive)
gcloud storage buckets add-iam-policy-binding gs://my-bucket \
--member="serviceAccount:my-app@my-project.iam.gserviceaccount.com" \
--role="roles/storage.objectViewer"Service Accounts
Non-human identities for applications.
Key concepts:
- User-managed: You create and manage
- Default service accounts: Auto-created, often overprivileged
- Service account keys: JSON files for external authentication (avoid when possible)
- Impersonation: Act as another service account without keys
Best practice: Avoid service account keys. Use Workload Identity for GKE, instance metadata for Compute Engine.
# Create service account
gcloud iam service-accounts create my-app \
--display-name="My Application"
# Grant minimal permissions
gcloud projects add-iam-policy-binding my-project \
--member="serviceAccount:my-app@my-project.iam.gserviceaccount.com" \
--role="roles/storage.objectViewer"
# Impersonation (for local development)
gcloud auth application-default login \
  --impersonate-service-account=my-app@my-project.iam.gserviceaccount.com
Networking
VPC Networks
Virtual networks in GCP. VPCs are global resources, unlike AWS's regional VPCs.
Key concepts:
- Global VPCs: Subnets are regional, but VPC spans all regions
- Auto mode: Pre-created subnets in each region
- Custom mode: You create subnets explicitly
- Private Google Access: Access GCP APIs without public IPs
- VPC Peering: Connect VPCs (no transitive peering)
Example question: "Design a network for a multi-region application."
# Create custom VPC
gcloud compute networks create my-vpc --subnet-mode=custom
# Create subnets in different regions
gcloud compute networks subnets create us-subnet \
--network=my-vpc \
--region=us-central1 \
--range=10.0.1.0/24 \
--enable-private-ip-google-access
gcloud compute networks subnets create eu-subnet \
--network=my-vpc \
--region=europe-west1 \
--range=10.0.2.0/24 \
  --enable-private-ip-google-access
GCP's global VPC means instances in different regions can communicate directly—no peering needed within the same VPC.
Firewall Rules
Network access control at the VPC level.
Key concepts:
- Network tags: Target instances by tag, not IP
- Service accounts: Target by service account identity
- Priority: Lower number = higher priority (0-65535)
- Implied rules: Deny all ingress, allow all egress by default
# Allow HTTP to instances tagged 'web'
gcloud compute firewall-rules create allow-http \
--network=my-vpc \
--allow=tcp:80 \
--target-tags=web \
--source-ranges=0.0.0.0/0
# Allow internal communication
gcloud compute firewall-rules create allow-internal \
--network=my-vpc \
--allow=tcp,udp,icmp \
  --source-ranges=10.0.0.0/8
Cloud Load Balancing
Global and regional load balancing options.
Key types:
- Global HTTP(S): Layer 7, single anycast IP, SSL termination
- Global TCP/SSL Proxy: Layer 4 for non-HTTP traffic
- Regional: Network load balancer, internal load balancer
- Cloud CDN: Integrated with Global HTTP(S) LB
Example question: "What load balancer for a global web application?"
Global HTTP(S) Load Balancer:
- Single anycast IP address
- Routes to nearest healthy backend
- SSL termination at edge
- Cloud CDN integration
- Cloud Armor for DDoS protection
# Create backend service
gcloud compute backend-services create web-backend \
--global \
--protocol=HTTP \
--health-checks=http-health-check
# Create URL map
gcloud compute url-maps create web-map \
--default-service=web-backend
# Create HTTPS proxy
gcloud compute target-https-proxies create web-proxy \
--url-map=web-map \
--ssl-certificates=my-cert
# Create forwarding rule (the actual IP)
gcloud compute forwarding-rules create web-rule \
--global \
--target-https-proxy=web-proxy \
  --ports=443
Data & Analytics
BigQuery
Serverless data warehouse. GCP's flagship analytics service.
Key concepts:
- Serverless: No infrastructure to manage
- Columnar storage: Optimized for analytics queries
- Standard SQL: ANSI-compliant SQL syntax
- Partitioning: Divide tables by date or integer range
- Clustering: Sort data within partitions by columns
- Materialized views: Pre-computed query results
Example question: "How do you optimize BigQuery costs?"
-- 1. Use partitioned tables
CREATE TABLE my_dataset.events
PARTITION BY DATE(event_timestamp)
CLUSTER BY user_id, event_type
AS SELECT * FROM raw_events;
-- 2. Select only needed columns (avoid SELECT *)
SELECT user_id, event_type, COUNT(*)
FROM my_dataset.events
WHERE DATE(event_timestamp) = '2026-01-07' -- Uses partition
GROUP BY user_id, event_type;
-- 3. Use approximate functions for large datasets
SELECT APPROX_COUNT_DISTINCT(user_id) as unique_users
FROM my_dataset.events;
-- 4. Preview query cost before running: the console's query validator
--    shows bytes to be processed; enabling "Query settings" >
--    "Use cached results" avoids re-billing repeated queries
Cost optimization strategies:
- Partition tables by date (query only relevant partitions)
- Cluster by frequently filtered columns
- Use --dry_run to check bytes scanned before querying (see the sketch after this list)
- Set up cost controls and quotas
- Use BI Engine for interactive dashboards
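A dry run reports how many bytes would be scanned without executing the query; a minimal sketch using the events table from above:
# Estimate cost without running the query
bq query --dry_run --use_legacy_sql=false \
  'SELECT user_id FROM my_dataset.events WHERE DATE(event_timestamp) = "2026-01-07"'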
Cloud Storage
Object storage. Comparable to AWS S3.
Storage classes:
| Class | Use Case | Min Duration | Retrieval Cost |
|---|---|---|---|
| Standard | Frequently accessed | None | Free |
| Nearline | Monthly access | 30 days | $0.01/GB |
| Coldline | Quarterly access | 90 days | $0.02/GB |
| Archive | Yearly access | 365 days | $0.05/GB |
# Create bucket with lifecycle policy
gcloud storage buckets create gs://my-bucket \
--location=us-central1 \
--default-storage-class=standard
# Set lifecycle policy (JSON file)
cat > lifecycle.json << EOF
{
  "rule": [
    {
      "action": {"type": "SetStorageClass", "storageClass": "NEARLINE"},
      "condition": {"age": 30}
    },
    {
      "action": {"type": "SetStorageClass", "storageClass": "COLDLINE"},
      "condition": {"age": 90}
    },
    {
      "action": {"type": "Delete"},
      "condition": {"age": 365}
    }
  ]
}
EOF
gcloud storage buckets update gs://my-bucket --lifecycle-file=lifecycle.json
Pub/Sub
Messaging service for async communication.
Key concepts:
- Topics: Channels for publishing messages
- Subscriptions: Pull or push delivery
- At-least-once delivery: Messages may be delivered multiple times
- Message ordering: Optional, within ordering key
- Dead letter topics: For failed message handling
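A sketch of wiring up a topic, a dead-letter topic, and a pull subscription; all names are placeholders:
# Create topic, dead-letter topic, and a pull subscription
# (the Pub/Sub service agent also needs publish rights on the dead-letter topic)
gcloud pubsub topics create my-topic
gcloud pubsub topics create my-dead-letter
gcloud pubsub subscriptions create my-subscription \
  --topic=my-topic \
  --dead-letter-topic=my-dead-letter \
  --max-delivery-attempts=5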
# Publisher
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path('my-project', 'my-topic')

data = '{"event": "user_signup", "user_id": "123"}'
future = publisher.publish(topic_path, data.encode('utf-8'))
print(f'Published message ID: {future.result()}')

# Subscriber
from google.cloud import pubsub_v1

subscriber = pubsub_v1.SubscriberClient()
subscription_path = subscriber.subscription_path('my-project', 'my-subscription')

def callback(message):
    print(f'Received: {message.data}')
    message.ack()

streaming_pull_future = subscriber.subscribe(subscription_path, callback=callback)
streaming_pull_future.result()  # Block so the subscriber keeps listening
Quick Reference: GCP vs AWS
| GCP Service | AWS Equivalent | Notes |
|---|---|---|
| Compute Engine | EC2 | VMs |
| Cloud Run | App Runner / Fargate | Serverless containers |
| Cloud Functions | Lambda | FaaS |
| GKE | EKS | Managed Kubernetes |
| Cloud Storage | S3 | Object storage |
| BigQuery | Redshift/Athena | Serverless data warehouse |
| Pub/Sub | SNS + SQS | Messaging |
| Cloud SQL | RDS | Managed databases |
| Spanner | Aurora Global | Global distributed DB |
| VPC | VPC | Networking (GCP is global) |
| IAM | IAM | Identity management |
| Cloud Logging | CloudWatch Logs | Logging |
| Cloud Monitoring | CloudWatch | Metrics and alerting |
Common Interview Scenarios
Scenario 1: Migrate a web application to GCP
Good answer structure:
- Assess current architecture and dependencies
- Choose compute platform (Compute Engine for lift-and-shift, Cloud Run for containerized)
- Plan data migration (Cloud Storage, Cloud SQL)
- Set up networking (VPC, load balancer)
- Implement security (IAM, firewall rules)
- Configure monitoring and logging
Scenario 2: Design for high availability
Key points:
- Use regional resources (Cloud Run, regional GKE clusters)
- Multiple zones within region
- Global load balancer for multi-region
- Cloud SQL with high availability (regional configuration; see the sketch after this list)
- Cloud Storage buckets can be regional, dual-region, or multi-region
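A sketch of creating a regional (HA) Cloud SQL instance; the name, version, and tier are placeholders:
# Regional availability = automatic failover to a standby in another zone
gcloud sql instances create my-db \
  --database-version=POSTGRES_15 \
  --tier=db-custom-2-8192 \
  --region=us-central1 \
  --availability-type=REGIONAL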
Scenario 3: Optimize costs
Strategies:
- Right-size instances using Recommender (see the sketch after this list)
- Use preemptible/spot VMs for batch workloads
- Committed use discounts for predictable workloads
- Storage class lifecycle policies
- BigQuery partitioning and cost controls
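A sketch of pulling right-sizing suggestions from Recommender; the project and zone are placeholders:
# List machine-type right-sizing recommendations for a zone
gcloud recommender recommendations list \
  --project=my-project \
  --location=us-central1-a \
  --recommender=google.compute.instance.MachineTypeRecommender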
Related Articles
This guide connects to the broader DevOps interview preparation:
Container Orchestration:
- Kubernetes Interview Guide - Core concepts that apply to GKE
- Docker Interview Guide - Container fundamentals
Other Cloud Platforms:
- AWS Interview Guide - Compare services
- Azure Interview Guide - Microsoft's cloud platform
DevOps Fundamentals:
- CI/CD & GitHub Actions Interview Guide - Deploy to GCP
- Linux Commands Interview Guide - gcloud CLI foundations
Architecture:
- System Design Interview Guide - Design patterns using GCP
Final Thoughts
GCP interviews focus on understanding services and their trade-offs, not memorizing console clicks. Key areas:
- Compute choices: When to use Compute Engine vs Cloud Run vs Cloud Functions vs GKE
- GKE depth: Autopilot vs Standard, Workload Identity, node pools
- BigQuery: Partitioning, clustering, cost optimization
- IAM: Service accounts, least privilege, avoiding key files
- Networking: Global VPCs, firewall rules with tags
Practice with the gcloud CLI—many interviews include hands-on components where you'll need to create resources or debug issues.
