◆ NexGenCloud Inc.

Fictional SaaS company migrating to Google Cloud — IAM Playbook
  • Industry: B2B Analytics Platform
  • GCP Org ID: nexgencloud.com (fictional)
  • Teams: Platform Eng (10) | Data Science (6) | Security (3) | Dev (25)
  • GCP Projects: ~15 (shared infra; dev, staging, prod per service)
  • Compliance: SOC 2, HIPAA (target)

🛠 Scenario

NexGenCloud is migrating their analytics platform from on-prem to GCP. You're the Platform Engineer setting up the IAM foundation. The CISO requires: least privilege, auditability, and data exfiltration protection.

📚 Pre-Requisites (Do Beforehand)

  • GCP Org — must have Organization node (not free trial)
  • Billing Account linked to the org
  • Org Admin role (roles/resourcemanager.organizationAdmin)
  • IAM Security Reviewer role for audits
  • Cloud Identity / Google Workspace domain verified

🎯 What You'll Build

  • SOP 1 Org Hierarchy + Folder structure
  • SOP 2 Custom IAM Roles (least privilege)
  • SOP 3 IAM Conditions (time/IP/resource)
  • SOP 4 Service Accounts + Workload Identity Fed
  • SOP 5 Deny Policies (break-glass prevention)
  • SOP 6 VPC Service Controls perimeter
  • SOP 7 Policy Analyzer + Audit Logging
  • SOP 8 Project Factory Pattern

1 Org Hierarchy & Folder Structure
Objective: Create folder hierarchy for env isolation. Duration: 15 min. Impact: All future projects inherit policies.

⚠ Prerequisites

  • GCP Organization node must exist — check at IAM & Admin / Settings
  • You need roles/resourcemanager.organizationAdmin or roles/resourcemanager.folderAdmin
  • Decide naming convention beforehand: nexgen-{env}-{purpose}

👉 Portal Steps

Step 1 — Navigate to Resource Manager
Go to IAM & Admin / Manage Resources
You'll see the Organization node at top: nexgencloud.com
Step 2 — Create Top-Level Folders
Click CREATE FOLDER → Name each:
nexgen-common — Shared infra (networking, security, CI/CD)
nexgen-dev — Development workloads
nexgen-staging — Pre-production validation
nexgen-prod — Production workloads
Step 3 — Create Sub-Folders (per team)
Inside each env folder, create team sub-folders:
nexgen-dev/platform, data-science, apis
nexgen-prod/platform, data-science, apis
Step 4 — Create Initial Projects
Click CREATE PROJECT under each sub-folder:
nexgen-shared-networking (under nexgen-common)
nexgen-dev-platform-gke (under nexgen-dev/platform)
nexgen-dev-ds-pipelines (under nexgen-dev/data-science)
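
The portal steps above can also be scripted with gcloud. This is a sketch: ORG_ID is a placeholder, and the numeric folder IDs (DEV_FOLDER_ID, PLATFORM_FOLDER_ID) come from the output of the earlier commands.

```shell
# Create the four top-level environment folders under the org
gcloud resource-manager folders create --display-name=nexgen-common --organization=ORG_ID
gcloud resource-manager folders create --display-name=nexgen-dev --organization=ORG_ID
gcloud resource-manager folders create --display-name=nexgen-staging --organization=ORG_ID
gcloud resource-manager folders create --display-name=nexgen-prod --organization=ORG_ID

# Team sub-folder under nexgen-dev (use the numeric ID returned above)
gcloud resource-manager folders create --display-name=platform --folder=DEV_FOLDER_ID

# Create an initial project inside a sub-folder
gcloud projects create nexgen-dev-platform-gke --folder=PLATFORM_FOLDER_ID
```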

✅ Verification

You should see this tree in Resource Manager:

nexgencloud.com (Organization)
+-- nexgen-common (Folder)
|   +-- nexgen-shared-networking (Project)
+-- nexgen-dev (Folder)
|   +-- platform (Folder)
|   |   +-- nexgen-dev-platform-gke (Project)
|   +-- data-science (Folder)
|       +-- nexgen-dev-ds-pipelines (Project)
+-- nexgen-staging (Folder)
+-- nexgen-prod (Folder)
+-- ...

⚠ Key Tip for Interview: Hierarchy matters because Org Policies and IAM roles inherit downward. Setting a policy at nexgen-prod folder applies to all projects inside it. Use this for policy isolation between environments.

2 Custom IAM Roles (Least Privilege)
Objective: Create two custom roles — SecurityViewer (read-only) and NetworkAdmin (scoped network mgmt). Duration: 20 min.

⚠ Prerequisites

  • Need roles/iam.roleAdmin at Organization level
  • Know exactly which permissions are needed (start broad, then narrow)
  • Use IAM & Admin / Roles to reference existing roles

👉 Portal Steps — Create nexgenSecurityViewer

Step 1
IAM & Admin / Roles / + CREATE ROLE
Step 2 — Configure Role
Title: NexGen Security Viewer
ID: nexgenSecurityViewer (auto-generated)
Description: Read-only access to security configurations, IAM, and audit logs
Role launch stage: General Availability
Add Permissions:
iam.roles.get, iam.roles.list
iam.serviceAccounts.getIamPolicy
resourcemanager.projects.getIamPolicy
logging.logEntries.list, logging.logs.list
orgpolicy.policies.list, orgpolicy.policy.get
cloudasset.assets.* (Cloud Asset Inventory)
Step 3 — Create
Click CREATE. Role is now available org-wide.
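
Equivalently, the role can be created from a YAML definition with gcloud. A sketch: ORG_ID is a placeholder, and the permission list mirrors the portal steps above.

```shell
# Write the role definition (permissions mirror the portal steps)
cat > nexgen-security-viewer.yaml <<'EOF'
title: NexGen Security Viewer
description: Read-only access to security configurations, IAM, and audit logs
stage: GA
includedPermissions:
- iam.roles.get
- iam.roles.list
- iam.serviceAccounts.getIamPolicy
- resourcemanager.projects.getIamPolicy
- logging.logEntries.list
- logging.logs.list
- orgpolicy.policies.list
- orgpolicy.policy.get
EOF

# Create it org-wide (requires roles/iam.roleAdmin; uncomment with a real org ID)
# gcloud iam roles create nexgenSecurityViewer --organization=ORG_ID \
#   --file=nexgen-security-viewer.yaml
```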

👉 Portal Steps — Create nexgenNetworkAdmin

Step 1
IAM & Admin / Roles / + CREATE ROLE
Step 2 — Key Permissions
Title: NexGen Network Admin
ID: nexgenNetworkAdmin
Permissions:
compute.networks.create, compute.networks.update, compute.networks.delete
compute.subnetworks.* (full control)
compute.firewalls.create, compute.firewalls.update, compute.firewalls.delete
compute.routes.*
dns.managedZones.*
compute.interconnects.* (if on-prem)

🎯 Assign SecurityViewer

  • Go to IAM & Admin / IAM / Organization node
  • Click + ADD → Add security team group: security@nexgencloud.com
  • Select role: Custom → NexGen Security Viewer
  • Click SAVE. Inherits to all child folders/projects.

🎯 Assign NetworkAdmin

  • Go to IAM & Admin / IAM at nexgen-common folder
  • Add network-team@nexgencloud.com with NexGen Network Admin
  • Scoped to the nexgen-common (shared infra) folder only — not dev/prod

✅ Verification

Go to IAM & Admin / Roles → Filter by "nexgen". You'll see both custom roles with permission counts. Assign a test user and verify they can only read IAM policies but cannot modify.

⚠ Interview Tip: Say: "Custom roles reduce blast radius. Instead of giving roles/editor (3000+ perms), we give exactly the perms needed. Audit with Policy Analyzer to find over-permissioned principals."

3 IAM Conditions
Objective: Add time-based, IP-based, and resource-tag conditions to existing role bindings. Duration: 15 min.

⚠ Prerequisites

  • Custom role from SOP 2 (or any existing role)
  • Resource Manager tags created (optional but recommended)
  • Understand CEL (Common Expression Language) syntax

👉 Scenario A — Time-Based Condition

"Developers can deploy to staging only during business hours (9AM-5PM Mon-Fri)."

Step 1
IAM & Admin / IAM → select nexgen-staging folder → find the developer group.
Step 2
Click the pencil icon → click on the role → ADD CONDITION.
Step 3 — Condition Configurator
Title: Business hours only
Condition type: Time (temporal), or enter the CEL expression directly in the Condition Editor
Expression (CEL):
request.time.getHours("America/Chicago") >= 9 &&
request.time.getHours("America/Chicago") < 17 &&
request.time.getDayOfWeek("America/Chicago") >= 1 &&
request.time.getDayOfWeek("America/Chicago") <= 5
Step 4
Click SAVE. Condition appears as bindings[].condition in the policy JSON.
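
The same conditional binding can be attached from the CLI. A sketch: FOLDER_ID and the roles/container.developer role are placeholders for whatever binding the developer group actually holds on nexgen-staging.

```shell
# Condition spec in YAML (expression is the CEL from Step 3)
cat > business-hours.yaml <<'EOF'
title: Business hours only
expression: >-
  request.time.getHours("America/Chicago") >= 9 &&
  request.time.getHours("America/Chicago") < 17 &&
  request.time.getDayOfWeek("America/Chicago") >= 1 &&
  request.time.getDayOfWeek("America/Chicago") <= 5
EOF

# Attach at the staging folder (uncomment with real IDs)
# gcloud resource-manager folders add-iam-policy-binding FOLDER_ID \
#   --member="group:dev-team@nexgencloud.com" \
#   --role="roles/container.developer" \
#   --condition-from-file=business-hours.yaml
```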

👉 Scenario B — Resource Tag Condition

"Only allow changes to resources tagged environment=prod if coming from the on-prem NAT IP."

Step 1 — Create Tag
IAM & Admin / Tags / + CREATE TAG
Key: environment
Values: dev, staging, prod, shared
Scope: Organization (so all projects can use it)
Step 2 — Tag a Resource
Go to a GCS bucket or Compute instance → Labels & Tags → attach environment=prod
Step 3 — IAM Condition with Tag + IP
Edit the role binding for the ops team at project level. Add condition:
resource.matchTag("nexgencloud.com/environment", "prod") && "accessPolicies/POLICY_NUMBER/accessLevels/onprem_nat" in request.auth.access_levels
Note: IAM Conditions cannot test the caller's IP directly for most services. Define the on-prem NAT range (203.0.113.0/24) as an Access Context Manager access level, then reference it via request.auth.access_levels as shown.
Step 4
SAVE. Now only on-prem IPs can modify prod-tagged resources.
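
Step 1's tag creation can also be scripted with gcloud. A sketch: ORG_ID and TAG_KEY_ID (the numeric ID returned by the first command) are placeholders.

```shell
# Create the tag key at the org level
gcloud resource-manager tags keys create environment --parent=organizations/ORG_ID

# Create its values under the key (use the tagKeys/ ID returned above)
gcloud resource-manager tags values create dev --parent=tagKeys/TAG_KEY_ID
gcloud resource-manager tags values create staging --parent=tagKeys/TAG_KEY_ID
gcloud resource-manager tags values create prod --parent=tagKeys/TAG_KEY_ID
gcloud resource-manager tags values create shared --parent=tagKeys/TAG_KEY_ID
```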

✅ Verification

Use IAM & Admin / Policy Analyzer → query a principal to see their effective access with conditions applied. The condition status shows granted or not granted based on context.

4 Service Accounts & Workload Identity Federation
Objective: Create SA for GKE workloads and set up WIF (Workload Identity Federation) so GitHub Actions can deploy without static keys. Duration: 25 min.

⚠ Prerequisites

  • GKE cluster with Workload Identity enabled (--workload-pool=PROJECT.svc.id.goog)
  • GitHub repo with OIDC provider configured (or Azure DevOps)
  • Permissions: iam.serviceAccountAdmin, iam.workloadIdentityPoolAdmin

👉 Part A — Create & Bind GCP Service Account

Step 1
IAM & Admin / Service Accounts / + CREATE SERVICE ACCOUNT
Step 2 — Configure
Name: gke-microservice-sa
ID: gke-microservice-sa
Description: Service account for GKE microservices to read GCS and write logs
Roles (grant least privilege):
roles/storage.objectViewer (read GCS)
roles/logging.logWriter (write logs)
roles/monitoring.metricWriter (custom metrics)
Step 3 — Create Key (Optional, for non-WIF)
Click KEYS tab → ADD KEY → Create New Key → JSON (download, store securely)
⚠ Better: use WIF (next step) instead of keys!
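
Part A can be scripted as follows (a sketch; PROJECT is a placeholder project ID):

```shell
# Create the service account
gcloud iam service-accounts create gke-microservice-sa \
  --project=PROJECT \
  --display-name="GKE microservice SA"

# Grant the three least-privilege roles at the project level
for role in roles/storage.objectViewer roles/logging.logWriter roles/monitoring.metricWriter; do
  gcloud projects add-iam-policy-binding PROJECT \
    --member="serviceAccount:gke-microservice-sa@PROJECT.iam.gserviceaccount.com" \
    --role="$role"
done
```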

👉 Part B — Workload Identity Federation (GitHub Actions)

Step 1 — Create Workload Identity Pool
IAM & Admin / Workload Identity Federation / + CREATE POOL
Name: github-pool
ID: github-pool
Step 2 — Add OIDC Provider
Inside pool → ADD PROVIDER:
Provider name: github-provider
Issuer URL: https://token.actions.githubusercontent.com
Audience: https://github.com/NexGenCloud
Attribute mapping:
google.subject = assertion.sub
attribute.repository = assertion.repository
attribute.ref = assertion.ref
Step 3 — Grant SA Access to Pool
In the pool → GRANT ACCESS → select SA gke-microservice-sa.
This creates a binding allowing the pool to impersonate that SA. Condition: limit to specific repo:
assertion.repository == "NexGenCloud/backend-api"
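
Part B can be scripted with gcloud. A sketch: PROJECT and PROJECT_NUMBER (the numeric project number) are placeholders, and the attribute condition restricts the provider to the backend-api repo as above.

```shell
# Create the pool and the GitHub OIDC provider
gcloud iam workload-identity-pools create github-pool \
  --project=PROJECT --location=global --display-name="GitHub pool"

gcloud iam workload-identity-pools providers create-oidc github-provider \
  --project=PROJECT --location=global \
  --workload-identity-pool=github-pool \
  --issuer-uri="https://token.actions.githubusercontent.com" \
  --attribute-mapping="google.subject=assertion.sub,attribute.repository=assertion.repository,attribute.ref=assertion.ref" \
  --attribute-condition='assertion.repository == "NexGenCloud/backend-api"'

# Allow identities from that repo to impersonate the SA
gcloud iam service-accounts add-iam-policy-binding \
  gke-microservice-sa@PROJECT.iam.gserviceaccount.com \
  --role=roles/iam.workloadIdentityUser \
  --member="principalSet://iam.googleapis.com/projects/PROJECT_NUMBER/locations/global/workloadIdentityPools/github-pool/attribute.repository/NexGenCloud/backend-api"
```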

👉 Part C — K8s Workload Identity

Step 1
Create K8s SA in namespace:
kubectl create sa ksa-backend -n prod
Step 2
Bind K8s SA to GCP SA:
gcloud iam service-accounts add-iam-policy-binding \
  gke-microservice-sa@PROJECT.iam.gserviceaccount.com \
  --role roles/iam.workloadIdentityUser \
  --member "serviceAccount:PROJECT.svc.id.goog[prod/ksa-backend]"
Step 3
Annotate K8s SA:
kubectl annotate sa ksa-backend -n prod \
  iam.gke.io/gcp-service-account=gke-microservice-sa@PROJECT.iam.gserviceaccount.com

✅ Verification (Deploy a Pod)

# Test from inside a pod:
kubectl run test-pod --image=google/cloud-sdk:slim \
  --serviceaccount=ksa-backend -it --rm -- \
  gcloud auth list
# You should see the GCP SA listed automatically

No keys needed! Workload Identity exchanges K8s SA token for GCP SA token automatically.

5 Deny Policies (Explicit Deny Guardrails)
Objective: Set org-level deny policies to prevent public exposure of resources and unauthorized service account key creation. Duration: 15 min.

⚠ Prerequisites

  • Need roles/iam.denyAdmin at Organization level (separate from regular IAM admin)
  • Understand that deny always overrides allow
  • Test in a dev folder first (dry-run mindset)

👉 Deny Policy 1 — Prevent Public GCS Buckets

Step 1
IAM & Admin / Deny Policies / + CREATE DENY POLICY
Step 2 — Scope
Select the Organization node (nexgencloud.com) to apply globally.
Step 3 — Configure Deny
Policy ID: deny-public-gcs
Denial conditions: (leave blank to apply to all)
Principals: ALL (apply to everyone)
Permissions (deny policies use the SERVICE/RESOURCE.ACTION form): search and add:
storage.googleapis.com/buckets.setIamPolicy
storage.googleapis.com/objects.setIamPolicy
Step 4
Under Exceptions → add your security admin group:
principalSet://goog/group/security-admins@nexgencloud.com — this allows break-glass access.
Step 5
Click CREATE. Now no one (except security admins) can make a GCS bucket public.
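
The same deny policy can be expressed as JSON and attached with gcloud. A sketch: ORG_ID is a placeholder, and note the URL-encoded attachment point and the SERVICE/RESOURCE.ACTION permission form that deny policies require.

```shell
# Deny policy definition
cat > deny-public-gcs.json <<'EOF'
{
  "rules": [
    {
      "denyRule": {
        "deniedPrincipals": ["principalSet://goog/public:all"],
        "exceptionPrincipals": ["principalSet://goog/group/security-admins@nexgencloud.com"],
        "deniedPermissions": ["storage.googleapis.com/buckets.setIamPolicy"]
      }
    }
  ]
}
EOF

# Attach at the org (requires roles/iam.denyAdmin; uncomment with a real org ID)
# gcloud iam policies create deny-public-gcs \
#   --attachment-point=cloudresourcemanager.googleapis.com%2Forganizations%2FORG_ID \
#   --kind=denypolicies --policy-file=deny-public-gcs.json
```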

👉 Deny Policy 2 — Block Service Account Key Creation

Step 1
IAM & Admin / Deny Policies / + CREATE DENY POLICY
Step 2
Scope: nexgen-prod folder only (dev/staging can still create keys for testing).
Step 3
Policy ID: deny-sa-key-creation-prod
Principals: ALL
Permissions (deny-policy form):
iam.googleapis.com/serviceAccountKeys.create
(Alternatively, the org policy constraint iam.disableServiceAccountKeyCreation achieves the same guardrail with less setup.)
Step 4
CREATE. Prod now requires WIF — no static keys allowed.

✅ Verification

Try to make a GCS bucket public: Cloud Storage / Bucket / Permissions / + Add principal / allUsers / Storage Object Viewer. The request fails with a "permission denied by deny policy" style error.

Check deny policy logs: Logging / Logs Explorer → query: protoPayload.metadata.denyPolicyName

6 VPC Service Controls Perimeter
Objective: Create a VPC SC perimeter around production projects to prevent data exfiltration. Duration: 30 min.

⚠ Prerequisites

  • Identify which projects to protect (prod projects with GCS, BQ, Bigtable)
  • Identify on-prem IP ranges and SA that need ingress/egress access
  • Permissions: accesscontextmanager.policyAdmin
  • Org ID — needed for the access policy name

👉 Portal Steps

Step 1 — Navigate
IAM & Admin / VPC Service Controls / + NEW PERIMETER
Step 2 — Basic Info
Title: nexgen-prod-perimeter
Type: Regular (standard). Bridge is only for sharing data between projects in two different perimeters.
Step 3 — Add Projects
Select all projects under the nexgen-prod folder:
nexgen-prod-platform-gke
nexgen-prod-ds-pipelines
nexgen-prod-apis
nexgen-prod-data-lake
Step 4 — Restricted Services
Select services to restrict (add all that hold sensitive data):
Cloud Storage
BigQuery
Bigtable
Cloud Spanner
Cloud SQL
Dataflow
Step 5 — Ingress Rules (Allow Inbound)
+ ADD INGRESS RULE for on-prem access:
Source type: IP Subnet: 203.0.113.0/24 (on-prem)
Identity: serviceAccount:etl-sa@nexgen-prod-data-lake.iam.gserviceaccount.com
Services: Cloud Storage, BigQuery
Step 6 — Egress Rules (Allow Outbound)
+ ADD EGRESS RULE for monitoring & CI/CD:
Destination: EXTERNAL → specify monitoring project
Identity: serviceAccount:monitoring-sa@nexgen-common.iam...
Services: Cloud Monitoring, Cloud Logging
Step 7 — Dry Run First
Set mode to DRY RUN → monitor logs for violations → then switch to ENFORCED.
Check violations: VPC Service Controls / PERIMETER / Activity Logs
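
The perimeter can also be created from the CLI, again starting in dry-run mode. A sketch: ORG_ID, POLICY_ID, and PROD_PROJECT_NUMBER are placeholders, and only two restricted services are shown.

```shell
# Look up the org's access policy (there is one per org)
gcloud access-context-manager policies list --organization=ORG_ID

# Create the perimeter in dry-run mode first
gcloud access-context-manager perimeters dry-run create nexgen_prod_perimeter \
  --policy=POLICY_ID \
  --perimeter-title="nexgen-prod-perimeter" \
  --perimeter-type=regular \
  --perimeter-resources=projects/PROD_PROJECT_NUMBER \
  --perimeter-restricted-services=storage.googleapis.com,bigquery.googleapis.com
```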

⚠ Critical: Always start in DRY RUN. A misconfigured perimeter can break all prod access. Monitor for 24-48 hours before enforcing.

✅ Verification

From a VM outside the perimeter: gsutil ls gs://nexgen-prod-bucket should fail with 403 VPC Service Controls.

From inside (VM in prod project or on-prem with ingress): should succeed.

7 Policy Analyzer & Audit Logging
Objective: Troubleshoot who has access to what, and set up audit logging for IAM changes. Duration: 20 min.

⚠ Prerequisites

  • Access to query: roles/iam.roleViewer + roles/cloudasset.viewer
  • Cloud Asset API enabled in at least one project

👉 Part A — Use Policy Analyzer

Step 1
IAM & Admin / Policy Analyzer
Step 2 — Query Example
Principal: dev-team@nexgencloud.com
Scope: nexgencloud.com (Organization)
Click ANALYZE
Result shows: all roles, resources, and conditions affecting this group. Green = access granted, Red = denied.
Step 3 — Advanced Query
Use Custom Query to find over-permissioned:
# Find all principals granted roles/editor at the org level:
gcloud asset search-all-iam-policies \
  --scope=organizations/ORG_ID \
  --query='policy:"roles/editor"'

👉 Part B — Set Up Audit Logs for IAM Changes

Step 1
IAM & Admin / Audit Logs → click ADMIN ACTIVITY tab.
Admin Activity logs are always on (free, 400-day retention).
Step 2 — Enable Data Access Logs (for IAM)
Under DATA ACCESS tab → search for IAM:
IAM → check Admin Read, Data Read, and Data Write
⚠ Data Access logs are chargeable. Enable selectively.
Step 3 — Create Log Sink (Aggregation)
Logging / Logs Router / + CREATE SINK
Sink name: iam-audit-sink
Inclusion filter:
protoPayload.serviceName="iam.googleapis.com"
Destination: BigQuery dataset nexgen_audit_logs (dataset names cannot contain hyphens) for querying with SQL.
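
The sink can also be created with gcloud (a sketch; PROJECT is a placeholder):

```shell
# Route IAM audit entries to BigQuery
gcloud logging sinks create iam-audit-sink \
  bigquery.googleapis.com/projects/PROJECT/datasets/nexgen_audit_logs \
  --log-filter='protoPayload.serviceName="iam.googleapis.com"'
# Then grant the sink's writer identity (printed by the command)
# BigQuery Data Editor on the destination dataset.
```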

✅ Verification

After making an IAM change, run this in Logging / Logs Explorer:

protoPayload.serviceName="iam.googleapis.com"
protoPayload.methodName="SetIamPolicy"
timestamp > "2026-05-14T00:00:00Z"

You'll see who changed what policy, when, and the diff.

8 Project Factory Pattern
Objective: Create a standardized project creation workflow using a "project factory" — base project + pre-configured IAM, network, and audit setup. Duration: 20 min (automated).

⚠ Prerequisites

  • Be comfortable with Terraform
  • Shared VPC host project already deployed
  • Service account for Terraform with roles/resourcemanager.projectCreator + billing permissions

👉 Portal Steps — Create the Pattern Manually (First Time)

Step 1 — Create a Baseline Project
IAM & Admin / Manage Resources / CREATE PROJECT
Name: nexgen-dev-backend-v2
Parent: nexgen-dev/apis folder
Billing account: Link to org billing
Step 2 — Enable APIs
APIs & Services / Enable APIs. Required for most projects:
compute.googleapis.com
container.googleapis.com
cloudresourcemanager.googleapis.com
iam.googleapis.com
logging.googleapis.com
monitoring.googleapis.com
Step 3 — Attach Shared VPC
VPC Network / Shared VPC / Attach to project
Select the shared VPC host project and choose subnets (e.g. dev-backend-subnet).
Step 4 — Assign Baseline IAM
IAM → add:
dev-team@nexgencloud.com → roles/container.developer (can deploy to GKE)
ci-cd-sa@nexgen-common.iam.gserviceaccount.com → roles/container.developer
monitoring-sa@nexgen-common.iam.gserviceaccount.com → roles/monitoring.metricWriter
Step 5 — Enable VPC SC Perimeter (prod only)
VPC Service Controls / Perimeter / Edit / + Add project
Add the new project to the existing nexgen-prod-perimeter.

👉 Automate with Terraform (The Real Way)

# project-factory/main.tf
resource "google_project" "project" {
  name            = var.project_name
  project_id      = var.project_id
  folder_id       = var.folder_id
  billing_account = var.billing_account
}

resource "google_project_service" "apis" {
  for_each = toset(var.enabled_apis)
  project  = google_project.project.project_id
  service  = each.key
}

resource "google_compute_shared_vpc_service_project" "attach" {
  count           = var.attach_shared_vpc ? 1 : 0
  host_project    = var.host_project_id
  service_project = google_project.project.project_id
}

# Usage:
# module "new_project" {
#   source          = "./modules/project-factory"
#   project_name    = "nexgen-prod-aiml-v2"
#   folder_id       = "folders/PROD_FOLDER_ID"
#   billing_account = "BILLING_ID"
# }

✅ Verification

After Terraform apply, go to Resource Manager and confirm the new project exists in the correct folder with APIs enabled. Check IAM to verify baseline roles are applied.