Module 5: Managing Operations
Looking for ‘Preparing for Your Professional Cloud Security Engineer Journey Module 5 Answers’?
In this post, I provide complete, accurate, and detailed explanations for the answers to Module 5: Managing Operations of Course 1: Preparing for Your Professional Cloud Security Engineer Journey – Preparing for Google Cloud Certification: Cloud Security Engineer Professional Certificate.
Whether you’re preparing for quizzes or brushing up on your knowledge, these insights will help you master the concepts effectively. Let’s dive into the correct answers and detailed explanations for each question!
Diagnostic questions
Practice Assignment
1. Cymbal Bank needs a secure, compliant DevSecOps solution on Google Cloud for vulnerability scanning, granular access control, and cryptographic key management. They also require dynamic artifact analysis triggered by metadata changes. Which Google Cloud services best meet these stringent requirements?
- Cloud Storage with Object Lifecycle Management for artifact versioning, integrated with Cloud Functions triggered by object creation events for ad-hoc vulnerability checks, and secured with IAM roles based on least privilege.
- Artifact Registry with Artifact Analysis, leveraging IAM Conditions for fine-grained access control and Cloud KMS for Customer-Managed Encryption Keys, integrated with Cloud Build triggers based on Pub/Sub notifications from Artifact Registry. ✅
- Custom artifact storage on Compute Engine with Binary Authorization for policy enforcement, combined with independent vulnerability scanning tools deployed on Google Kubernetes Engine, and integrated with Cloud Logging for audit trails.
- Compute Engine instances running custom vulnerability scanning scripts, secured with VPC Service Controls, and managed using Cloud Deployment Manager with Terraform modules for infrastructure as code.
Explanation:
Artifact Registry with Artifact Analysis covers vulnerability scanning out of the box, IAM Conditions provide the fine-grained access control, Cloud KMS supplies the customer-managed encryption keys, and Pub/Sub notifications from Artifact Registry can trigger Cloud Build whenever artifact metadata changes, so every requirement is met with managed services.
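To make the winning option concrete, here is a minimal gcloud sketch of a CMEK-protected Artifact Registry repository with a conditional IAM grant. Every name below (project, location, key ring, group) is a placeholder I made up, not a value from the question:

```bash
# Create a Docker repository in Artifact Registry encrypted with a
# customer-managed key from Cloud KMS (CMEK).
gcloud artifacts repositories create banking-images \
  --repository-format=docker \
  --location=us-central1 \
  --kms-key=projects/my-project/locations/us-central1/keyRings/my-ring/cryptoKeys/my-key

# Grant read access only until an expiry date using an IAM Condition.
gcloud artifacts repositories add-iam-policy-binding banking-images \
  --location=us-central1 \
  --member="group:devsecops@example.com" \
  --role="roles/artifactregistry.reader" \
  --condition='expression=request.time < timestamp("2026-01-01T00:00:00Z"),title=temporary-access'
```

Once the Container Scanning API is enabled, Artifact Analysis scans pushed images automatically and publishes metadata changes to Pub/Sub topics (for example container-analysis-occurrences-v1), which can in turn drive a Cloud Build trigger for the metadata-driven analysis the bank wants.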
2. Cymbal Bank’s management is concerned about virtual machines being compromised by bad actors. More specifically, they want to receive immediate alerts if there have been changes to the boot sequence of any of their Compute Engine instances. What should you do?
- Set Cloud Logging measurement policies on the VMs. Use Cloud Logging to place alerts whenever actualMeasurements and policyMeasurements don’t match.
- Set an organization-level policy that requires all Compute Engine VMs to be configured as Shielded VMs. Use Measured Boot enabled with Virtual Trusted Platform Module (vTPM). Validate integrity events in Cloud Monitoring and place alerts on late boot validation events. ✅
- Set project-level policies that require all Compute Engine VMs to be configured as Shielded VMs. Use Measured Boot enabled with Virtual Trusted Platform Module (vTPM). Validate integrity events in Cloud Monitoring and place alerts on late boot validation events.
- Set an organization-level policy that requires all Compute Engine VMs to be configured as Shielded VMs. Use Secure Boot enabled with Unified Extensible Firmware Interface (UEFI). Validate integrity events in Cloud Monitoring and place alerts on launch attestation events.
Explanation:
An organization-level policy guarantees that every Compute Engine VM in the bank is a Shielded VM. Measured Boot with vTPM records each stage of the boot sequence, and the resulting integrity events surface in Cloud Monitoring, where an alert on late boot validation events notifies the team as soon as a boot sequence deviates from the baseline. A project-level policy would leave other projects uncovered, and Secure Boot alone does not produce the measurement events needed for alerting.
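As a rough sketch of how the chosen answer could be put in place (the organization ID, zone, and instance name are invented placeholders):

```bash
# Enforce Shielded VMs for every new Compute Engine instance in the org.
gcloud resource-manager org-policies enable-enforce \
  compute.requireShieldedVm --organization=123456789012

# Create an instance with vTPM and integrity monitoring (Measured Boot);
# boot validation events are then written to Cloud Logging/Monitoring.
gcloud compute instances create api-vm-1 \
  --zone=us-central1-a \
  --image-family=debian-12 \
  --image-project=debian-cloud \
  --shielded-secure-boot \
  --shielded-vtpm \
  --shielded-integrity-monitoring
```

An alerting policy in Cloud Monitoring on the late boot validation status metric (or a log-based alert on late boot report entries) then notifies the team whenever a VM fails validation.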
3. Cymbal Bank runs a Node.js application on a Compute Engine instance. Cymbal Bank needs to share this base image with a ‘development’ Google Group. This base image should support secure boot for the Compute Engine instances deployed from this image. How would you automate the image creation?
- Start the Compute Engine instance. Set up certificates for secure boot. Prepare a cloudbuild.yaml configuration file. Specify the persistent disk location of the Compute Engine and the ‘development’ group. Use the command gcloud builds submit --tag, and specify the configuration file path and the certificates.
- Stop the Compute Engine instance. Set up Measured Boot for secure boot. Prepare a cloudbuild.yaml configuration file. Specify the persistent disk location of the Compute Engine instance and the ‘development’ group. Use the command gcloud builds submit --tag, and specify the configuration file path.
- Prepare a shell script. Add the command gcloud compute instances start to the script to start the Node.js Compute Engine instance. Set up Measured Boot for secure boot. Add gcloud compute images create, and specify the persistent disk and zone of the Compute Engine instance.
- Prepare a shell script. Add the command gcloud compute instances stop with the Node.js instance name. Set up certificates for secure boot. Add gcloud compute images create, and specify the Compute Engine instance’s persistent disk and zone and the certificate files. Add gcloud compute images add-iam-policy-binding and specify the ‘development’ group. ✅
Explanation:
Stopping the instance puts the persistent disk in a consistent state before imaging. The Secure Boot certificates are baked into the image with gcloud compute images create, and gcloud compute images add-iam-policy-binding shares the resulting image with the ‘development’ group, all of which can be scripted end to end.
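A minimal shell sketch of such a script might look like the following, assuming hypothetical instance, zone, certificate file, and group names:

```bash
#!/bin/bash
# Stop the instance so the disk is in a consistent state before imaging.
gcloud compute instances stop nodejs-app --zone=us-central1-a

# Create an image from the instance's persistent disk, embedding the
# Secure Boot certificates (platform key, KEK, signature database).
gcloud compute images create nodejs-base-image \
  --source-disk=nodejs-app \
  --source-disk-zone=us-central1-a \
  --platform-key-file=pk.der \
  --key-exchange-key-file=kek.der \
  --signature-database-file=db.der

# Share the image with the 'development' Google Group.
gcloud compute images add-iam-policy-binding nodejs-base-image \
  --member="group:development@example.com" \
  --role="roles/compute.imageUser"
```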
4. Cymbal Bank uses Docker containers to interact with APIs for its personal banking application. These APIs are under PCI-DSS compliance. The Kubernetes environment running the containers will not have internet access to download required packages. How would you automate the pipeline that is building these containers?
- Create a Dockerfile with a container definition and a Cloud Build configuration file. Use the Cloud Build configuration file to build and deploy the image from Dockerfile to Artifact Registry. In the configuration file, include the Artifact Registry path and the Google Kubernetes Engine cluster. Upload the configuration file to a Git repository. ✅
- Create a trigger in Cloud Build to automate the deployment using the Git repository. Create a Dockerfile with container definition and cloudbuild.yaml file. Use Cloud Build to build the image from Dockerfile. Upload the built image to Artifact Registry and Dockerfile to a Git repository. In the cloudbuild.yaml template, include attributes to tag the Git repository path with a Google Kubernetes Engine cluster. Create a trigger in Cloud Build to automate the deployment using the Git repository.
- Build an immutable image. Store all artifacts and a Packer definition template in a Git repository. Use Artifact Registry to build the artifacts and Packer definition. Use Cloud Build to extract the built container and deploy it to a Google Kubernetes Engine Cluster (GKE). Add the required users and groups to the GKE project.
- Build a foundation image. Store all artifacts and a Packer definition template in a Git repository. Use Artifact Registry to build the artifacts and Packer definition. Use Cloud Build to extract the built container and deploy it to a Google Kubernetes Engine (GKE) cluster. Add the required users and groups to the GKE project.
Explanation:
Because the cluster has no internet access, all required packages must be baked into the image at build time. Cloud Build builds the image from the Dockerfile, pushes it to Artifact Registry, and deploys it to the GKE cluster as described in the configuration file; keeping that file in a Git repository makes the pipeline repeatable and ready to automate with a build trigger.
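For illustration only, a pared-down pipeline could look like this. The project, repository, deployment, and cluster names are assumptions, and a Cloud Build trigger on the Git repository would normally replace the manual submit:

```bash
# Write a minimal Cloud Build config: build the Dockerfile, push the image
# to Artifact Registry, and roll it out to the GKE deployment.
cat > cloudbuild.yaml <<'EOF'
steps:
- name: 'gcr.io/cloud-builders/docker'
  args: ['build', '-t', 'us-central1-docker.pkg.dev/my-project/banking-images/api:latest', '.']
- name: 'gcr.io/cloud-builders/docker'
  args: ['push', 'us-central1-docker.pkg.dev/my-project/banking-images/api:latest']
- name: 'gcr.io/cloud-builders/kubectl'
  args: ['set', 'image', 'deployment/api', 'api=us-central1-docker.pkg.dev/my-project/banking-images/api:latest']
  env:
  - 'CLOUDSDK_COMPUTE_REGION=us-central1'
  - 'CLOUDSDK_CONTAINER_CLUSTER=banking-cluster'
EOF

# Commit cloudbuild.yaml and the Dockerfile to the Git repository, then run
# the pipeline (or attach it to a repository trigger).
gcloud builds submit --config=cloudbuild.yaml .
```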
5. Cymbal Bank has Docker applications deployed in Google Kubernetes Engine. The bank has no offline containers. This GKE cluster is exposed to the public internet and has recently recovered from an attack. Cymbal Bank suspects that someone in the organization changed the firewall rules and has tasked you to analyze and find all details related to the firewall for the cluster. You want the most cost-effective solution for this task. What should you do?
- View the GKE logs in Cloud Logging. Use the log scoping tool to filter the Firewall Rules log. Create a Pub/Sub topic. Export the logs to a Pub/Sub topic using the command gcloud logging sinks create. Use Dataflow to read from Pub/Sub and query the stream.
- View the GKE logs in the local GKE cluster. Use the kubectl Sysdig Capture tool to filter the Firewall Rules log. Create a Pub/Sub topic. Export these logs to a Pub/Sub topic using the GKE cluster. Use Dataflow to read from Pub/Sub and query the stream.
- View the GKE logs in the local GKE cluster. Use Docker-explorer to explore the Docker file system. Filter and export the Firewall logs to Cloud Logging. Create a dataset in BigQuery to accept the logs. Use the command gcloud logging sinks create to export the logs to a BigQuery dataset. Query this dataset.
- View the GKE logs in Cloud Logging. Use the log scoping tool to filter the Firewall Rules log. Create a dataset in BigQuery to accept the logs. Export the logs to BigQuery using the command gcloud logging sinks create. Query this dataset. ✅
Explanation:
Firewall Rules Logging entries are already available in Cloud Logging, so no in-cluster tooling is needed. Exporting them to BigQuery through a log sink lets you run ad-hoc SQL over the history of rule changes, which is cheaper and simpler than standing up a Dataflow streaming pipeline against Pub/Sub.
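A minimal sketch of that export, assuming a hypothetical project and dataset name:

```bash
# Create a BigQuery dataset to hold the exported logs.
bq mk --dataset my-project:firewall_logs

# Route Firewall Rules Logging entries from Cloud Logging into BigQuery.
gcloud logging sinks create firewall-sink \
  bigquery.googleapis.com/projects/my-project/datasets/firewall_logs \
  --log-filter='logName="projects/my-project/logs/compute.googleapis.com%2Ffirewall"'

# Grant the sink's writer identity (printed by the previous command)
# roles/bigquery.dataEditor on the dataset, then query the exported table.
```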
6. Cymbal Bank experienced a recent security issue. A rogue employee with admin permissions for Compute Engine assigned existing Compute Engine users some arbitrary permissions. You are tasked with finding all these arbitrary permissions. What should you do to find these permissions most efficiently?
- Use Event Threat Detection and trigger the IAM Anomalous grants detector. Publish results to the Security Command Center. In the Security Command Center, select Event Threat Detection as the source, filter by category: iam, and sort to find the attack time window. Click on Persistence: IAM Anomalous Grant to display Finding Details. View the Source property of the Finding Details section. ✅
- Use Event Threat Detection and trigger the IAM Anomalous Grant detector. Publish results to Cloud Logging. In the Security Command Center, select Cloud Logging as the source, filter by category: anomalies, and sort to find the attack time window. Click on Persistence: IAM Anomalous Grant to display Finding Details. View the Source property of the Finding Details section.
- Use Event Threat Detection and configure Continuous Exports to filter and write only Firewall logs to the Security Command Center. In the Security Command Center, select Event Threat Detection as the source, filter by evasion: Iam, and sort to find the attack time window. Click on Persistence: IAM Anomalous Grant to display Finding Details. View the Source property of the Finding Details section.
- Use Event Threat Detection and configure Continuous Exports to filter and write only Firewall logs to the Security Command Center. In the Security Command Center, select Event Threat Detection as the source, filter by category: anomalies, and sort to find the attack time window. Click on Evasion: IAM Anomalous Grant to display Finding Details. View the Source property of the Finding Details section.
Explanation:
Event Threat Detection’s IAM Anomalous Grant detector is built for exactly this case: it flags unusual permission grants automatically and publishes findings to the Security Command Center, where you can filter by category, narrow the attack window, and inspect each finding’s details to see who granted what.
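If you prefer the CLI to the console, something along these lines should surface the same findings (the organization ID is a placeholder, and the exact category string may vary):

```bash
# List active Event Threat Detection findings in the anomalous-grant
# category for the organization; add an eventTime clause to narrow the window.
gcloud scc findings list 123456789012 \
  --filter='category="Persistence: IAM Anomalous Grant" AND state="ACTIVE"'
```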
7. Cymbal Bank wants to use Cloud Storage and BigQuery to store safe deposit usage data. Cymbal Bank needs a cost-effective approach to auditing only Cloud Storage and BigQuery data access activities. How would you use Cloud Audit Logs to enable this analysis?
- Enable Data Access Logs for ADMIN_READ, DATA_READ, and DATA_WRITE for Cloud Storage. All Data Access Logs are enabled for BigQuery by default. ✅
- Enable Data Access Logs for ADMIN_READ, DATA_READ, and DATA_WRITE for BigQuery. All Data Access Logs are enabled for Cloud Storage by default.
- Enable Data Access Logs for ADMIN_READ, DATA_READ, and DATA_WRITE at the organization level.
- Enable Data Access Logs for ADMIN_READ, DATA_READ, and DATA_WRITE at the service level for BigQuery and Cloud Storage.
Explanation:
BigQuery Data Access audit logs are enabled by default and cannot be turned off, so only Cloud Storage needs explicit activation of ADMIN_READ, DATA_READ, and DATA_WRITE. Scoping the logs to Cloud Storage instead of enabling them organization-wide keeps logging costs down.
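Data Access logs are enabled through the audit configuration in the project’s IAM policy. A rough sketch, with a placeholder project ID:

```bash
# Export the project's current IAM policy, which also holds audit configs.
gcloud projects get-iam-policy my-project --format=json > policy.json

# Merge an auditConfigs entry like the following into policy.json
# (ADMIN_READ, DATA_READ, DATA_WRITE for Cloud Storage only):
cat <<'EOF'
"auditConfigs": [
  {
    "service": "storage.googleapis.com",
    "auditLogConfigs": [
      {"logType": "ADMIN_READ"},
      {"logType": "DATA_READ"},
      {"logType": "DATA_WRITE"}
    ]
  }
]
EOF

# Apply the updated policy to enable Data Access logs for Cloud Storage.
gcloud projects set-iam-policy my-project policy.json
```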
8. Cymbal Bank has suffered a remote botnet attack on Compute Engine instances in an isolated project. The affected project now requires investigation by an external agency. An external agency requests that you provide all admin and system events to analyze in their local forensics tool. You want to use the most cost-effective solution to enable the external analysis. What should you do?
- Use Cloud Monitoring and Cloud Logging. Filter Cloud Monitoring to view only system and admin logs. Expand the system and admin logs in Cloud Logging. Use Pub/Sub to export the findings from Cloud Logging to the external agency’s forensics tool or storage.
- Use the Security Command Center. Select Cloud Logging as the source, and filter by category: Admin Activity and category: System Activity. View the Source property of the Finding Details section. Use Pub/Sub topics to export the findings to the external agency’s forensics tool.
- Use Event Threat Detection. Trigger the IAM Anomalous Grant detector to detect all admins and users with admin or system permissions. Export these logs to the Security Command Center. Give the external agency access to the Security Command Center.
- Use Cloud Audit Logs. Filter Admin Activity audit logs for only the affected project. Use a Pub/Sub topic to stream the logs from Cloud Audit Logs to the external agency’s forensics tool. ✅
Explanation:
Admin Activity audit logs are always on and free of charge, so filtering them to the affected project and streaming them through a Pub/Sub topic gives the external agency exactly the admin and system events they asked for, without Security Command Center Premium or extra infrastructure.
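A minimal sketch of that export, using invented project, topic, and sink names:

```bash
# Create the topic the external agency's tooling will subscribe to.
gcloud pubsub topics create forensics-export

# Route only Admin Activity audit logs from the affected project.
gcloud logging sinks create admin-activity-sink \
  pubsub.googleapis.com/projects/my-project/topics/forensics-export \
  --log-filter='logName="projects/my-project/logs/cloudaudit.googleapis.com%2Factivity"'

# Grant the sink's writer identity roles/pubsub.publisher on the topic,
# then give the agency a subscription on forensics-export.
```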
9. The loan application from Cymbal Bank’s lending department collects credit reports that contain credit payment information from customers. According to bank policy, the PDF reports are stored for six months in Cloud Storage, and access logs for the reports are stored for three years. You need to configure a cost-effective storage solution for the access logs. What should you do?
- Set up a logging export dataset in BigQuery to collect data from Cloud Logging and Cloud Monitoring. Create table expiry rules to delete logs after three years.
- Set up a logging export bucket in Cloud Storage to collect data from the Security Command Center. Configure object lifecycle management rules to delete logs after three years.
- Set up a logging export dataset in BigQuery to collect data from Cloud Logging and the Security Command Center. Create table expiry rules to delete logs after three years.
- Set up a logging export bucket in Cloud Storage to collect data from Cloud Audit Logs. Configure object lifecycle management rules to delete logs after three years. ✅
Explanation:
This is the most cost-effective solution because:
- Cloud Storage is cheaper than BigQuery for long-term log storage.
- Cloud Audit Logs are the relevant logs to monitor access to the PDF reports.
- You can configure lifecycle rules to automatically delete logs after 3 years, which meets the retention requirement.
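A possible implementation sketch, with placeholder bucket, project, and sink names (1095 days approximates three years):

```bash
# Bucket that receives the exported audit logs.
gsutil mb -l us-central1 gs://cymbal-access-logs

# Route Cloud Storage data-access audit logs into the bucket.
gcloud logging sinks create report-access-sink \
  storage.googleapis.com/cymbal-access-logs \
  --log-filter='logName="projects/my-project/logs/cloudaudit.googleapis.com%2Fdata_access" AND resource.type="gcs_bucket"'

# Delete exported log objects after roughly three years.
cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "Delete"}, "condition": {"age": 1095}}
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://cymbal-access-logs
```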
10. Cymbal Bank uses Compute Engine instances for its APIs, and recently discovered bitcoin mining activities on some instances. The bank wants to detect all future mining attempts and notify the security team. The security team can view the Security Command Center and Cloud Audit Logs. How should you configure the detection and notification?
- Enable the VM Manager tools suite in the Security Command Center. Perform a scan of Compute Engine instances. Publish results to Cloud Audit Logging. Create an alert in Cloud Monitoring to send notifications of suspect activities.
- Enable Anomaly Detection in the Security Command Center. Create and configure a Pub/Sub topic and an email service. Create a Cloud Run function to send email notifications for suspect activities. Export findings to a Pub/Sub topic, and use them to invoke the Cloud Run function. ✅
- Enable the Web Security Scanner in the Security Command Center. Perform a scan of Compute Engine instances. Publish results to Cloud Audit Logging. Create an alert in Cloud Monitoring to send notifications for suspect activities.
- Use Event Threat Detection’s threat detectors. Export findings from ‘Suspicious account activity’ and ‘Anomalous IAM behavior’ detectors and publish them to a Pub/Sub topic. Create a Cloud Run function to send notifications of suspect activities. Use Pub/Sub notifications to invoke the Cloud Run function.
Explanation:
- Cryptomining detection is provided by Anomaly Detection and Event Threat Detection in the Security Command Center Premium tier.
- The Event Threat Detection option picks the wrong detectors: ‘Suspicious account activity’ and ‘Anomalous IAM behavior’ target account and IAM misuse, not mining workloads.
- The correct method involves:
- Enabling Anomaly Detection, which includes cryptomining threat detection.
- Exporting findings to a Pub/Sub topic.
- Using a Cloud Run function, invoked by Pub/Sub, to notify the security team by email or another alerting mechanism.
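One way this could be wired up from the CLI, using made-up organization, project, and topic names:

```bash
# Stream Security Command Center findings to Pub/Sub as they are created.
gcloud pubsub topics create scc-findings

gcloud scc notifications create cryptomining-alerts \
  --organization=123456789012 \
  --pubsub-topic=projects/my-project/topics/scc-findings \
  --filter='state="ACTIVE"'

# A Cloud Run function subscribed to scc-findings then parses each finding
# and emails the security team when it indicates mining activity.
```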
Knowledge Check
Graded Assignment
11. Which feature of Google Cloud will Cymbal Bank use to prevent unauthorized container images from being deployed into production environments?
- Audit logs
- Binary Authorization ✅
- Cloud Build
- Cloud Monitoring
Explanation:
Binary Authorization is a security feature that enforces deploy-time security policies to ensure only trusted container images are deployed to environments like GKE (Google Kubernetes Engine). It prevents unauthorized or unverified images from being used.
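For context, a minimal Binary Authorization policy that only admits attested images might look like this; the project and attestor names are placeholders:

```bash
# Sketch of a Binary Authorization policy: block any image that is not
# attested by 'prod-attestor', and audit-log the decision.
cat > policy.yaml <<'EOF'
globalPolicyEvaluationMode: ENABLE
defaultAdmissionRule:
  evaluationMode: REQUIRE_ATTESTATION
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
  requireAttestationsBy:
  - projects/my-project/attestors/prod-attestor
EOF

# Apply the policy to the project.
gcloud container binauthz policy import policy.yaml
```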
12. How will Cymbal Bank be able to determine who performed a particular administrative action and when?
- Audit logs ✅
- VPC flow logs
- VPC service controls
- Cloud Monitoring
Explanation:
Cloud Audit Logs record administrative actions and data access across Google Cloud resources, capturing who did what, where, and when. They are the standard source for traceability and accountability in administrative operations.
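For example, a quick way to answer “who did what, and when” from the CLI (the project ID is a placeholder):

```bash
# Show recent Admin Activity audit log entries: actor, action, timestamp.
gcloud logging read \
  'logName="projects/my-project/logs/cloudaudit.googleapis.com%2Factivity"' \
  --limit=10 \
  --format='table(timestamp, protoPayload.authenticationInfo.principalEmail, protoPayload.methodName)'
```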
Related content:
Module 2: Configuring Access
Module 3: Securing Communications and Establishing Boundary Protection
Module 4: Ensuring Data Protection
Module 6: Supporting Compliance Requirements
You might also like:
Course 2: Google Cloud Fundamentals: Core Infrastructure
Course 3: Networking in Google Cloud: Fundamentals
Course 4: Networking in Google Cloud: Routing and Addressing
Course 5: Networking in Google Cloud: Network Architecture
Course 6: Networking in Google Cloud: Network Security
Course 7: Networking in Google Cloud: Load Balancing
Course 8: Networking in Google Cloud: Hybrid and Multicloud
Course 9: Managing Security in Google Cloud
Course 10: Security Best Practices in Google Cloud
Course 11: Mitigating Security Vulnerabilities on Google Cloud
Course 12: Logging and Monitoring in Google Cloud
Course 13: Observability in Google Cloud
Course 14: Hands-On Labs in Google Cloud for Security Engineers