Notes: Hi all, Google Professional Cloud Architect Practice Exam Part 3 will familiarize you with the types of questions you may encounter on the certification exam and help you determine whether you are ready or need more preparation and/or experience. Successful completion of the practice exam does not guarantee you will pass the certification exam, as the actual exam is longer and covers a wider range of topics. We highly recommend the Google Professional Cloud Architect Actual Exam version, which includes more questions with highlighted answers and will help you pass the exam more easily.
1. Your company wants to migrate all on-premises applications to Google Cloud and plans to do this in several phases over the next two years. During this period, workloads split between the on-premises data centre and Google Cloud need to communicate with each other. Your GCP network has a single VPC. How should you set up the network to enable communication between applications running in Google Cloud and in the on-premises data centre?
A. Configure both Primary and Secondary IP ranges of the VPC to not overlap with the on-premises VLAN.
B. Configure the VPC in GCP to use the same IP range as on-premises VLAN.
C. Configure the Secondary IP range of the VPC in GCP to use the same IP range as on-premises VLAN and use a non-overlapping range for the Primary range.
D. Configure the Primary IP range of the VPC in GCP to use the same IP range as on-premises VLAN and use a non-overlapping range for the Secondary range.
10. Your company offers online news subscriptions for customers and processes their payments in a PCI-DSS compliant application running on Google Cloud. Local regulations require you to store these logs for one year. Your compliance team has asked for your recommendation on removing payment card data before storing the logs. What should you do?
A. Use Cloud Function to modify the log entry to redact payment card data.
B. Use the SHA1 algorithm to hash the log.
C. Encrypt all data using a master key.
D. Use Cloud Data Loss Prevention (DLP) API.
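Note: for context on option D, the sketch below (the project ID and sample log line are hypothetical) shows one way of calling the Cloud DLP API to redact card numbers from a log entry before it is written to long-term storage.

curl -s -X POST \
  -H "Authorization: Bearer $(gcloud auth print-access-token)" \
  -H "Content-Type: application/json" \
  "https://dlp.googleapis.com/v2/projects/my-pci-project/content:deidentify" \
  -d '{
    "item": {"value": "payment failed for card 4111 1111 1111 1111"},
    "inspectConfig": {"infoTypes": [{"name": "CREDIT_CARD_NUMBER"}]},
    "deidentifyConfig": {
      "infoTypeTransformations": {
        "transformations": [
          {"primitiveTransformation": {"replaceWithInfoTypeConfig": {}}}
        ]
      }
    }
  }'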
11. Your on-premises data centre is at full capacity, but your company has forecasted considerable growth in Apache Spark and Hadoop jobs. The existing jobs require lots of operational support, and your company doesn’t have the resources, either people or hardware, to accommodate the forecasted growth. You have been asked to recommend services in Google Cloud that your company could leverage to process these jobs while minimizing the cost, changes and operational overhead. Which GCP service(s) should you recommend?
A. Google Cloud Dataproc.
B. Google Kubernetes Engine (GKE).
C. Google Cloud Dataflow.
D. Google Compute Engine (GCE).
12. New releases of an application that you manage are taking longer than expected, and you want to speed up the deployments. The application is deployed on GKE using this Dockerfile:
FROM ubuntu:20.04
COPY . /src
RUN apt-get update -y && apt-get upgrade -y
RUN apt-get install -y python python-pip python-dev
RUN pip install -r requirements_new.txt
What should you do? (Select 2 answers)
A. Update GKE node pool to use larger machine types.
B. Change the order of steps to copy source files after installing dependencies.
C. Add RUN apt-get remove python to the Dockerfile to remove python after running pip install.
D. Installing dependencies is slowing down the docker build. Remove all dependencies from requirements_new.txt.
E. Use a smaller image like the Alpine version instead of the full Ubuntu image.
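Note: the idea behind options B and E is to keep the slow dependency-install layers cacheable and the base image small. A minimal sketch of a reordered Dockerfile, assuming the same requirements file and a smaller Python base image:

FROM python:3.8-alpine
# Copy only the requirements file first so the dependency layers are cached
# and rebuilt only when requirements_new.txt changes.
COPY requirements_new.txt /src/requirements_new.txt
RUN pip install -r /src/requirements_new.txt
# The application source changes most often, so copy it last.
COPY . /src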
13. Your team developed a custom binary to install an application. Your change management team has asked you to install the binary after two weeks. You want to transfer the binary file to a Cloud Shell instance today and access it in two weeks from any directory. Where should you store the binary file on the Cloud Shell instance?
A. /opt
B. /usr/bin
C. $HOME/bin
D. /var/log
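Note: only the $HOME directory of a Cloud Shell instance persists across sessions, which is what option C relies on. A quick sketch (the binary name is hypothetical):

mkdir -p $HOME/bin
cp ./install-app $HOME/bin/                          # persists across Cloud Shell restarts
echo 'export PATH=$PATH:$HOME/bin' >> $HOME/.bashrc  # makes it callable from any directory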
14. Your company owns a mission-critical application that is used extensively for handling online credit card payments. Your data centre is due for a hardware refresh, and the company has instead decided to migrate all applications to Google Cloud. Your compliance team requires the application to be compliant with PCI DSS regulations. You want to know if Google Kubernetes Engine is suitable for hosting your application. What should you do?
A. In addition to the Kubernetes Engine and GCP, which are PCI-DSS compliant, the application itself should also be compliant with the regulations.
B. Google Cloud Platform is PCI-DSS compliant; therefore, any applications hosted on it are automatically compliant.
C. Kubernetes Engine is multi-tenant and not PCI DSS compliant.
D. Migrate the solution to App Engine Standard, which is the only GCP compute platform certified for PCI DSS.
15. Your company plans to migrate several production applications to Google Cloud. You want to follow Google-recommended practices to simplify IAM access controls for the whole organization. There are 20+ departments, each with different IAM requirements. You wish to set up these IAM controls centrally for ease of management. How should you design the resource hierarchy?
A. Set up a single organization with multiple folders, one for each department.
B. Set up multiple organizations, one organization for each department.
C. Set up multiple folders within multiple organizations, one organization for each department.
D. Set up a single organization with multiple projects, one for each department.
16. Your company is currently using a bespoke logging utility that gathers logs from all on-premises virtual machines and stores them in a NAS. Your company plans to provision several new cloud-based products in Google Cloud, and the initial analysis has determined the existing logging utility is not compatible with some of the new cloud products. What should you do to ensure you can capture errors and view historical log data easily?
A. Review Application Logging best practices.
B. Review the logging requirements and use existing logging utility.
C. Patch the current logging utility to the latest version, and take advantage of new features.
D. Install Google Cloud logging agent on all VMs.
17. Your company has a hybrid network architecture with workloads running primarily off the on-premises data centre and GCP as the failover location. One of the applications uses an on-premises MySQL database, and you have set up a Cloud SQL replica to replicate MySQL database to Google Cloud platform. You have Cloud Monitoring Alarms to monitor replication lag (latency) and replication loss (packet loss) for the replication from on-premises MySQL database to Cloud SQL. Both alarms have recently triggered, and your operations team have asked you to fix the issue. What should you do? (Select Two)
A. When replication fails, restore Google Cloud SQL to a working backup.
B. Update MySQL replication to use UDP protocol.
C. Configure multiple VPN tunnels to take over if Dedicated Interconnect fails.
D. Set up Google Cloud Dedicated Interconnect between the on-premises network and the GCP network.
E. Send the replication logs to Cloud Storage, use Cloud Function to replicate them to the Cloud SQL replica.
18. Your company’s auditors carry out an annual audit every year and have asked you to provide them with all the IAM policy changes in Google Cloud since the last audit. You do not want to compromise on the security, and you want to expedite and streamline the audit analysis. How should you share the information requested by auditors?
A. Write a Cloud Function to poll relevant logs in Cloud Logging, extract the necessary data and push the data to Cloud SQL. Set up IAM rules to enable the audit team to query data from the Cloud SQL instance.
B. Set up a sink destination to Google Cloud Storage bucket and export relevant logs from Cloud Logging. Set up IAM rules to enable the audit team access to log files in the bucket.
C. Set up alerts in Cloud Monitoring for each IAM policy change and send an email to the audit team.
D. Set up a sink destination to BigQuery and export relevant logs from Cloud Logging. In BigQuery, create views with the necessary data and restrict them to the audit team with ACLs.
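Note: a minimal sketch of the export in option D, assuming a hypothetical project and BigQuery dataset; the filter narrows the sink to IAM policy changes recorded in the Admin Activity audit log.

gcloud logging sinks create iam-audit-sink \
  bigquery.googleapis.com/projects/my-audit-project/datasets/iam_audit \
  --log-filter='logName:"cloudaudit.googleapis.com%2Factivity" AND protoPayload.methodName="SetIamPolicy"'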
19. Your company has installed millions of IoT devices in numerous towns and cities all over the world. The IoT devices record chemical and radiation pollution levels every minute and push the readings to Google Cloud. Which GCP service should you use to store these readings?
A. Google BigQuery
B. Google Cloud Storage
C. Google Cloud Bigtable
D. Google Cloud SQL
2. You work for an accountancy firm that supports the accounting needs of several FTSE-100 companies. Your customers expect high-quality, agile service without compromising on release speed. Your company places a high value on being responsive to requests and meeting the needs of customers small and big alike, without compromising on security. Your company prides itself on placing security at the forefront of all other requirements. Taking into consideration the security requirements, how should you proceed with deploying a new update to the widely used accountancy application? (Choose 2 answers)
A. Update CI/CD pipeline to deploy changes only if unit-testing between components is successful.
B. Update CI/CD pipeline to ensure the application uses signed binaries from trusted repositories.
C. Ask the security team to review every code commit.
D. Update CI/CD pipeline to run a static source code security scan and deploy changes only if the scan doesn’t report any issues.
E. Update CI/CD pipeline to run a security vulnerability scan and deploy changes only if the scan doesn’t report any issues.
20. Your company wants to migrate its payroll and accounting solution from the on-premises data centre to Google Cloud. The finance department has requested the disruption be kept to a minimum as this application is heavily used for preparing the annual accounts. The existing security principles prevent you from storing passwords in Google Cloud. How should you authenticate users of the finance department while minimizing the disruption?
A. Run Google Cloud Directory Sync to create user identities in Google.
B. Require users to update their Google password to be same as their corporate password.
C. Replicate passwords from AD to Google Identity with G Suite Password Sync.
D. Use SAML to federate authentication to the on-premises Identity Provider (IdP).
21. You work for one of the world’s leading global business publications, which has over a million daily customers. Customers can get in touch with your customer service team by phone, email or chat. You are redesigning the chat application on Google Cloud, and you want to ensure chat messages can’t be spoofed by anyone. What should you do?
A. Add HTTP headers to the messages to identify the originating user and the destination.
B. Use Public Key Infrastructure (PKI) to set up 2-way SSL between the client and the server.
C. Use block-based encryption with a complex shared key to encrypt messages on client-side before relaying them over the network.
D. Encrypt the message on client-side with the user’s private key.
22. You recently deployed an update to a payroll application running on Google App Engine service, and several users started complaining about the slow performance. What should you do?
A. Rollback to the previous version. Roll forward to the new version at midnight when there is less traffic and look through Cloud Logging to debug the issue in the production environment. If necessary, enable Cloud Trace in your application for debugging.
B. Work with your telco partner to debug the performance issue.
C. Rollback to the previous version. Then, deploy the new version in a non-production environment and look through Cloud Logging to debug the issue. If necessary, enable Cloud Trace in your application for debugging.
D. Identify the cause in VPC flow logs before rolling back to the previous version.
23. Your company needs to move large quantities of analytics data every day from the on-premises data centre to Google Cloud. For the data transfers to happen within the agreed SLA, you require a network connection that is at least 20 Gbps. How should you design the connection between your on-premises network and GCP to enable this transfer within SLA?
A. Use Dedicated Interconnect between the on-premises network and the GCP network.
B. Use Dedicated Interconnect between the on-premises network and the GCP CDN edge.
C. Use a single Cloud VPN tunnel between the on-premises network and the GCP network.
D. Use a single Cloud VPN tunnel between the on-premises network and the GCP CDN edge.
24. Local health care regulations require your company to persist raw logs from all applications across all GCP projects for 5 years. What should you do?
A. Export logs from all projects to Google Cloud Storage.
B. Export logs from all projects to BigQuery.
C. Store logs in each project for 5 years.
D. Store logs in each project as per the default retention policies.
25. Your company developed an enhancement to a popular weather charting application. The enhancement offers several new features which the company hopes will establish it as the market leader in weather charting. The enhancement has been validated internally by the testing team and business users. The change management board is keen on testing these features with new customers in the production environment while allowing existing users to be served by the old version. Additionally, to prevent impact to existing users, you have been asked to identify how this can be implemented while retaining the existing DNS records and TLS certificates. What should you do?
A. Configure the Load Balancer for URL path-based routing and route to the appropriate version for each API path.
B. Add a new load balancer to serve traffic for the new version.
C. Configure requests from existing customers to use the new version endpoint.
D. Redirect requests on the old version to the new version.
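Note: option A can be sketched with a URL map path matcher (backend service names below are hypothetical), which leaves the existing DNS records and TLS certificates on the load balancer untouched.

gcloud compute url-maps add-path-matcher weather-url-map \
  --path-matcher-name=version-split \
  --default-service=weather-v1-backend \
  --path-rules="/v2/*=weather-v2-backend" \
  --new-hosts="*"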
26. Your company plans to migrate an existing application to Google Cloud. The application relies on URL path-based routing for its functionality. A requirements analyst has collected the following additional requirements from various teams. 1. The business owners consider the application mission-critical, and it should scale based on traffic. 2. The application support team requires multiple isolated test environments to replicate live issues. 3. The enterprise architect has recommended basing the solution on open-source technology to allow portability to other cloud providers in future. 4. The operations team has advised enabling continuous integration/delivery and being able to deploy application bundles using dynamic templates. 5. The application must retain its logs for 10 years while optimizing storage costs. As the Cloud migration architect of your company, which combination of services should you consider for this application?
A. GKE, Cloud Storage, Jenkins, and Cloud Load Balancing.
B. GKE, Cloud Storage, Jenkins, and Helm.
C. GKE, Cloud Logging and Cloud Deployment Manager.
D. GKE, Cloud Logging and Cloud Load Balancing.
27. Your company stores logs from all applications and all projects in a centralized location. You are the Cyber Security Team Lead, and you have been asked to identify ways to prevent bad actors from tampering with these logs. What can you do to verify that the logs are authentic?
A. Store logs in multiple locations.
B. Sign the log entry digitally and store the signature.
C. Store log entries in JSON format in a Cloud Storage Bucket.
D. Store logs in a database such as Cloud SQL or Cloud Spanner. Use IAM and ACLs to restrict who can modify the data.
28. You have deployed several batch jobs on preemptible Google Compute Engine Linux virtual machines. The batch jobs run overnight and can be restarted if interrupted. When interrupted, you want the instance to run a script to shutdown the batch job properly. What should you do?
A. Configure a shutdown script with xinetd service in Linux. Add a metadata tag with key as shutdown-script-url and value as service url.
B. When provisioning the VM, add a metadata tag with key as shutdown-script and value as the path to a local shutdown script.
C. Add a shutdown script to the /etc/rc.6.d/ directory.
D. Configure a shutdown script with xinetd service in Linux.
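Note: a minimal sketch of option B, with hypothetical instance and script names; on preemption, Compute Engine runs the script referenced by the shutdown-script metadata key.

gcloud compute instances create batch-worker-1 \
  --preemptible \
  --metadata-from-file shutdown-script=./stop-batch-job.sh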
29. Your company has a hybrid network architecture with workloads running primarily off the on-premises data centre and GCP as the failover location. One of the applications uses an on-premises MySQL database, and you have set up a Cloud SQL replica to replicate the MySQL database to the Google Cloud platform. At peak times, the updates to the MySQL database are massive, and you need to ensure these changes can be reliably replicated to Cloud SQL. Your network administrator has recommended private address space communication between the on-premises network and the GCP network. What network topology should you use?
A. Use Cloud VPN.
B. Set up a custom VPN server on a Google Compute Engine instance to connect to the on-premises network. Route all traffic through this server.
C. Use TLS & NAT translation gateway.
D. Use Dedicated Interconnect.
3. Your company has accumulated vast quantities of logs in Cloud Storage over the last 4 years. Local regulations require your company to persist logs for 90 days. Your company wants to cut costs by deleting logs older than 90 days. You want to minimize operational overhead and implement a solution that works for existing logs and new log entries. What should you do?
A. Write a custom script to retrieve all logs (ls -lrt gs:///**) in the bucket and delete logs older than 90 days. Schedule the script using cron.
B. Enable a lifecycle management rule to delete logs older than 90 days by pushing the lifecycle configuration in JSON format to the bucket using gsutil.
C. Write a custom script to retrieve top-level logs (ls -lt gs:///**) in the bucket and delete logs older than 90 days. Schedule the script using cron.
D. Enable a lifecycle management rule to delete logs older than 90 days by pushing the lifecycle configuration in XML format to the bucket using gsutil.
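Note: the lifecycle configuration in option B is a small JSON document pushed with gsutil; the bucket name below is hypothetical.

cat > lifecycle.json <<'EOF'
{
  "rule": [
    {"action": {"type": "Delete"}, "condition": {"age": 90}}
  ]
}
EOF
gsutil lifecycle set lifecycle.json gs://my-log-bucket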
30. You have recently deconstructed a huge monolith application into multiple microservices. At the application layer, all microservices are stateless and rely on a Cloud SQL instance to store and retrieve data. You have been asked by the security team to securely store the credentials to enable secure communication between the microservices and Cloud SQL instance. Where should you store them?
A. Store the database credentials in application source code.
B. Store the database credentials as environment variables in each microservice.
C. Store the database credentials in a file in Cloud Storage and restrict access through object ACLs.
D. Store the database credentials in Secret Manager.
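Note: a rough sketch of option D (the secret name and value are hypothetical); each microservice retrieves the credential at runtime instead of embedding it.

gcloud secrets create db-credentials --replication-policy=automatic
echo -n 's3cr3t-p@ssw0rd' | gcloud secrets versions add db-credentials --data-file=-
gcloud secrets versions access latest --secret=db-credentials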
31. Your company owns several brands for online news publications and plans to migrate them to Google Cloud. All news publication applications have the same architecture – a web tier, an application (API) tier and a database tier – all in the same VPC. Each brand has a separate web tier, but all brands share the same application tier and database tier. You are the network administrator for your company, and you are asked to enable communication from Web tiers to application tier and from application tier to database tier. The security team has forbidden communication between web tiers and database tier. What should you do?
A. Add network tags to each tier. Configure firewall rules based on network tags to allow the desired traffic.
B. Install a firewall service on the individual VMs and configure to allow the desired traffic.
C. Add network tags to each tier. Configure firewall routes based on network tags to allow the desired traffic.
D. Set up different subnets for each tier.
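Note: option A might look roughly like this (tag names, network name and ports are hypothetical); because no rule allows web-to-db traffic, it is blocked by the implied deny-ingress rule.

gcloud compute firewall-rules create allow-web-to-app \
  --network=news-vpc --allow=tcp:8080 \
  --source-tags=web --target-tags=app
gcloud compute firewall-rules create allow-app-to-db \
  --network=news-vpc --allow=tcp:3306 \
  --source-tags=app --target-tags=db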
32. Your company installs and services elevators and escalators. The elevators and escalators have several sensors that record several discrete items of information every second. Your company wants to use this information to detect service/maintenance needs accurately. What type of database should you use to store these items of information?
A. NoSQL
B. Relational.
C. Comma Separated File.
D. NAS.
33. Your testing team recently signed off a new release, and you have now deployed it to production; however, almost immediately you started noticing performance issues that were not visible in the test environment. How can you adjust your test and deployment procedures to avoid such issues in future?
A. Carry out testing in test and staging environments with production-like volume.
B. Split the change into smaller units and deploy one unit at a time to identify what causes performance issues.
C. Deploy fewer changes to production.
D. Enable the new version for 1% of users before rolling it out to all users.
34. All your company’s workloads currently run from an on-premises data centre. The existing hardware is due for a refresh in 3 years, and your company has commissioned several teams to explore various cloud platforms. Your team is in charge of exploring Google Cloud, and you would like to carry out proof of concept work for migration of some workloads to GCP. Your manager has asked for your suggestion on minimizing costs while enabling the proof of concept work to continue without committing to longer-term use. What should your recommendation be?
A. Use free tier where possible and sustained use discounts. Recruit a GCP cost management expert to help minimize operational cost.
B. Use free tier where possible and committed use discounts. Train the whole team to be aware of cost optimization techniques.
C. Use free tier where possible and sustained use discounts. Train the whole team to be aware of cost optimization techniques.
D. Use free tier where possible and committed use discounts. Recruit a GCP cost management expert to help minimize operational cost.
35. You recently migrated an application from an on-premises Kubernetes cluster to Google Kubernetes Engine (GKE). The application is forecasted to receive unpredictable/spiky traffic from next week, and your Team Lead has asked you to enable the GKE cluster node pool to scale on demand but not exceed 10 nodes. How should you configure the cluster?
A. Delete the existing cluster, create a new GKE cluster by running: gcloud container clusters create weatherapp_cluster --enable-autoscaling --min-nodes=1 --max-nodes=10 and deploy the application to the new cluster.
B. Update the existing GKE cluster to enable autoscaling and set min and max nodes by running: gcloud container clusters update weatherapp_cluster --enable-autoscaling --min-nodes=1 --max-nodes=10
C. Add tags to the instances to enable autoscaling and set max nodes to 10 by running: gcloud compute instances add-tags --tags enable-autoscaling,max-nodes-10
D. Resize the GKE cluster node pool to have 10 nodes, enough to handle spikes in traffic, by running: gcloud container clusters resize weatherapp_cluster --size 10
36. Your company specializes in clickstream analytics and uses advanced machine learning to identify opportunities for further growth. Your company recently won a contract to carry out these analytics for a leading retail platform. The retail platform has a large user base all over the world and generates up to 10,000 clicks per second during sale periods. What GCP service should you use to store these clickstream event messages?
A. Google Cloud Datastore.
B. Google Cloud Bigtable.
C. Google Cloud SQL.
D. Google Cloud Storage.
37. Your company recently migrated an application from the on-premises data centre to Google Cloud by lifting and shifting the VMs and the database. The application on Google Cloud uses 3 Google Compute Engine instances – 2 for the Python application tier and 1 for the MySQL database with an 80 GB disk. The application has started experiencing performance issues. Your operations team noticed both the CPU and memory utilization on the MySQL database are low, but the throughput and network IOPS are maxed out. What should you do to increase the performance?
A. Resize SSD persistent disk dynamically to 400 GB.
B. Increase CPU & RAM to 64 GB to compensate for throughput and network IOPS.
C. Migrate the database to Cloud SQL for PostgreSQL.
D. Use BigQuery instead of MySQL.
38. Your company is a leading online news media organization that has customers all over the world. The number of paying subscribers is over 2 million, and the free subscribers stand at 9 million. The user engagement team has identified 7.6 million active users among the free subscribers. It plans to send them an email with links to the current monthly and quarterly promotions to convert them into paying subscribers. The user engagement team is unsure of the click-through rate and has asked for a cost-efficient solution that can scale to handle anything between 100 and 1 million clicks per day. What should you do?
A. 1. Save user data in Cloud SQL. 2. Serve the web tier on a single Google Compute Engine Instance.
B. 1. Save user data in Cloud Datastore. 2. Serve the web tier on App Engine Standard Service.
C. 1. Save user data in Cloud BigTable. 2. Serve the web tier on a fleet of Google Compute Engine Instances in a MIG.
D. 1. Save user data in SSD Persistent Disks. 2. Serve the web tier on GKE.
39. You just migrated an application to Google Compute Engine. You deployed it across several regions – with an HTTP(S) Load Balancer and Managed Instance Groups (MIG) in each region that provision instances across multiple zones with just internal IP addresses. You noticed the instances are being terminated and relaunched every 45 seconds. The instance itself comes up within 20 seconds and is responding to cURL requests on localhost. What should you do to fix the issue?
A. Modify the instance template to add public IP addresses to the VMs, terminate all existing instances, update Load Balancer configuration to contact instances on public IP.
B. Add a load balancer tag to each instance. Set up a firewall rule to allow traffic from the Load Balancer to all instances with this tag.
C. Make sure firewall rules allow health check traffic to the VM instances.
D. Check load balancer firewall rules and ensure it can receive HTTP(s) traffic.
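Note: the symptom here (instances relaunched every 45 seconds while responding on localhost) usually points to failed health checks, which is what option C addresses. Google's health check probes originate from 130.211.0.0/22 and 35.191.0.0/16, so a rule along these lines (network and tag names hypothetical) is needed.

gcloud compute firewall-rules create allow-lb-health-checks \
  --network=app-vpc --allow=tcp:80 \
  --source-ranges=130.211.0.0/22,35.191.0.0/16 \
  --target-tags=app-backend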
4. A mission-critical application has recently experienced an outage due to a new release. The release is automatically deployed by a continuous deployment pipeline for a project stored in a GitHub repository. Currently, a commit merged to the main branch triggers the deployment pipeline, which deploys the new release to the production environment. Your change management committee has asked you if it is possible to verify (test) the release artefacts before deploying them to production. What should you do?
A. Continue using the existing CI/CD solution to deploy new releases to the production, but enable a mechanism to roll back quickly, e.g. Blue/Green deployments etc.
B. Continue using the existing CI/CD solution to deploy new releases to the production environment and carry out testing with live-traffic.
C. Configure the CI/CD solution to monitor tags in the repository. Deploy non-production tags to staging environments. After testing the changes, create production tags and deploy them to the production environment.
D. Use App Engine’s traffic splitting feature to enable the new version for 1% of users before rolling it out to all users.
40. Your company has set Identity and Access Management (IAM) policies at different levels of the resource hierarchy – VMs, projects, folders and the organization. You are the security administrator for your company, and you have been asked to identify the effective policy that applies to a particular VM. What should your response be?
A. The effective policy for the VM is the policy assigned directly to the VM and restricted by the policies of its parent resource.
B. The effective policy for the VM is a union of the policy assigned to the VM and the policies it inherits from its parent resource.
C. The effective policy for the VM is the policy assigned directly to the VM.
D. The effective policy for the VM is an intersection of the policy assigned to the VM and the policies it inherits from its parent resource.
41. You have recently migrated an on-premises MySQL database to a Linux Compute Engine instance in Google Cloud. A sudden burst of traffic has seen the free disk space on the MySQL server drop from 72% to 2%. What can you do to remediate the problem while minimizing the downtime?
A. Increase the size of the SSD persistent disk and verify the change by running fdisk command.
B. Increase the size of the SSD persistent disk. Resize the disk by running resize2fs.
C. Add a new larger SSD disk and move the database files to the new disk. Shut down the compute engine instance. Increase the size of SSD persistent disk and start the instance.
D. Restore snapshot of the existing disk to a bigger disk, update the instance to use new disk and restart the database.
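Note: a rough sketch of option B, with hypothetical disk, zone and device names; a persistent disk can be resized while attached, after which the ext4 filesystem is grown online with resize2fs.

gcloud compute disks resize mysql-data-disk --size=500GB --zone=europe-west2-a
# on the VM, assuming the data disk is /dev/sdb and formatted as ext4 without partitions
sudo resize2fs /dev/sdb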
42. You have recently deconstructed a huge monolith application into numerous microservices. Most requests are processed within the SLA, but some requests take a lot of time. As the application architect, you have been asked to identify which microservices take the longest. What should you do?
A. Send metrics from each microservice at each request start and request end to custom Cloud Monitoring metrics.
B. Decrease timeouts on each microservice so that requests fail faster and are retried.
C. Update your application with Cloud Trace and break down the latencies at each microservice.
D. Look for APIs with high latency in Cloud Monitoring Insights.
43. Your company developed a new weather forecasting application and deployed it in Google Cloud. The application is deployed on an autoscaled MIG across two zones, and the compute layer relies on a Cloud SQL database. The company launched a trial on a small group of employees and now wishes to expand this trial to a larger group, including unauthenticated public users. You have been asked to ensure the application response is within the SLA. What resilience testing strategy should you use to achieve this?
A. Capture the trial traffic and replay several instances of it simultaneously until all layers autoscale. Then, start terminating resources in one of the zones randomly.
B. Simulate user traffic until one of the application layers autoscales. Then, start chaos engineering by terminating resources randomly across both zones.
C. Start sending more traffic to the application until all layers autoscale. Then, start chaos engineering by terminating resources randomly across both zones.
D. Estimate the expected traffic and update the minimum size of the Managed Instance Group (MIG) to handle 200% of the expected traffic.
44. You designed a mission-critical application to have no single point of failure, yet the application suffered an outage recently. The post-mortem analysis has identified the failed component as the database layer. The Cloud SQL instance has a failover replica, but the replica was not promoted to primary. Your operations team have asked your recommendation for preventing such issues in future. What should you do?
A. Snapshot database more frequently.
B. Migrate database to an instance with more CPU.
C. Carry out planned failover periodically.
D. Migrate to a different database.
45. Your company has accumulated 200 TB of logs in the on-premises data centre. Your bespoke data warehousing analytics application, which processes these logs and runs in the on-premises data centre, doesn’t autoscale and is struggling to cope with the massive volume of data. Your infrastructure architect has estimated that the data centre will run out of space in 12 months. Your company would like to move the archived logs to Google Cloud for long-term storage and explore a replacement solution for its analytics needs. What would you recommend? (Choose two answers)
A. Migrate logs to Cloud SQL for long term storage and analytics.
B. Migrate log files to Cloud Logging for long term storage.
C. Migrate log files to Google Cloud Storage for long term storage.
D. Import logs from Google Cloud Storage to Google BigQuery for analytics.
E. Import logs to Google Cloud Bigtable for analytics.
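Note: a minimal sketch combining options C and D, with hypothetical bucket, dataset and schema names; the logs are copied to Cloud Storage for long-term storage and then loaded into BigQuery for analytics.

gsutil -m cp -r /data/archive-logs gs://acme-log-archive/
bq load --source_format=CSV \
  log_analytics.archive_logs \
  "gs://acme-log-archive/archive-logs/*.csv" \
  timestamp:TIMESTAMP,severity:STRING,message:STRING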
46. Your company holds multiple petabytes of data which includes historical stock prices for all stocks from all world financial markets. Your company now wants to migrate this data to Cloud. The financial analysts at your company have years of SQL experience, work round the clock and heavily depend on the historical data for predicting future stock prices. The chief analyst has asked you to ensure data is always available and minimize the impact on their team. Which GCP service should you use to store this data?
A. Google Cloud Storage.
B. Google Cloud SQL.
C. Google BigQuery.
D. Google Cloud Datastore.
47. Your company’s on-premises data centre is running out of space, and your CTO thinks the cost of migrating and running all development environments in Google Cloud Platform is cheaper than the capital expenditure required to expand the existing data centre. The development environments are currently subject to multiple stop, start and reboot events throughout the day and persist state across restarts. How can you design this solution on Google Cloud Platform while enabling your CTO to view operational costs on an ongoing basis?
A. Export detailed Google Cloud billing data to BigQuery and visualize cost reports in Google Data Studio.
B. Use Google Compute Engine VMs with Local SSD disks to store state across restarts.
C. Run gcloud compute instances set-disk-auto-delete on SSD persistent disks before stopping/rebooting the VM.
D. Use Google Compute Engine VMs with persistent disks to store state across restarts.
E. Apply labels on VMs to export their costs to BigQuery dataset.
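Note: for options A and E, once detailed billing export to BigQuery is enabled, a query such as the following (table name and label key are hypothetical) breaks costs down by environment label, which can then be visualized in Data Studio.

bq query --use_legacy_sql=false '
SELECT l.value AS environment, ROUND(SUM(cost), 2) AS total_cost
FROM `billing_ds.gcp_billing_export_v1_XXXXXX`, UNNEST(labels) AS l
WHERE l.key = "environment"
GROUP BY environment
ORDER BY total_cost DESC'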
48. Your company specializes in clickstream analytics and uses cutting-edge AI-driven analysis to identify opportunities for further growth. Most customers have their clickstream analytics evaluated in a batch every hour, but some customers pay more to have their clickstream analytics evaluated in real-time. Your company would now like to migrate the analytics solution from the on-premises data centre to Google Cloud and is keen on selecting a service that offers both batch processing for hourly jobs and live processing (real-time) for stream jobs. Which GCP service should you use for this requirement while minimizing cost?
A. Google Cloud Dataproc.
B. Google Compute Engine with Google BigQuery.
C. Google Kubernetes Engine with Bigtable.
D. Google Cloud Dataflow.
49. Your company recently acquired a health care start-up. Both your company and the acquired start-up have accumulated terabytes of reporting data in their respective data centres. You have been asked for your recommendation on the best way to detect anomalies in the reporting data. You wish to use services on the Google Cloud platform to achieve this. What should your recommendation be?
A. Upload reporting data of both companies to a Cloud Storage bucket, point Datalab at the bucket and clean data as necessary.
B. Configure Cloud Dataprep to connect to your on-premises reporting systems and clean data as necessary.
C. Upload reporting data of both companies to a Cloud Storage bucket, explore the bucket data in Cloud Dataprep and clean data as necessary.
D. Configure Cloud Datalab to connect to your on-premises reporting systems and clean your data as necessary.
5. Your company has a deadline to migrate all on-premises applications to Google Cloud Platform. Due to stringent timelines, your company decided to “lift and shift” all applications to Google Compute Engine. Simultaneously, your company has commissioned a small team to identify a better cloud-native solution for the migrated applications. You are the Team Lead, and one of your main requirements is to identify suitable compute services that can scale automatically and require minimal operational overhead (no-ops). Which GCP Compute Services should you use? (Choose two)
A. Use Google App Engine Standard.
B. Use Google Kubernetes Engine (GKE).
C. Use Managed Instance Groups (MIG) Compute Engine instances.
D. Use Google Compute Engine with custom VM images.
E. Use custom container orchestration on Google Compute Engine.
50. You have a business-critical application deployed in a non-autoscaling Managed Instance Group (MIG). The application is currently not responding to any requests. Your operations team analyzed the logs and discovered the instances keep restarting every 30 seconds. Your Team Lead would like to log in to an instance to debug the issue. What should you do to enable your Team Lead to log in to the VMs?
A. Disable autoscaling and add your Team Lead’s SSH key to the project-wide SSH Keys.
B. Carry out a rolling restart on the Managed Instance Group (MIG).
C. Disable Managed Instance Group (MIG) health check and add your Team Lead’s SSH key to the project-wide SSH keys.
D. Grant your Team Lead Project Viewer IAM role.
6. You enabled a cron job on a Google Compute Engine instance to trigger a Python API that connects to Google BigQuery to query data. The script complains that it is unable to connect to BigQuery. How should you fix this issue?
A. Install gcloud SDK, gsutil, and bq components on the VM.
B. Provision a new VM with BigQuery access scope enabled, and migrate both the cron job and python API to the new VM.
C. Configure the Python API to use a service account with relevant BigQuery access enabled.
D. Update Python API to use the latest BigQuery API client library.
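Note: a sketch of option C, with hypothetical project and service account names; the service account is granted only the BigQuery roles it needs and is then attached to the VM (or its key is supplied to the Python client).

gcloud iam service-accounts create bq-query-runner
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:bq-query-runner@my-project.iam.gserviceaccount.com" \
  --role="roles/bigquery.jobUser"
gcloud projects add-iam-policy-binding my-project \
  --member="serviceAccount:bq-query-runner@my-project.iam.gserviceaccount.com" \
  --role="roles/bigquery.dataViewer"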
7. You deployed an application in App Engine Standard service that uses indexes in Datastore for every query your application makes. You recently discovered a runtime issue in the application and attributed this to missing Cloud Datastore Indexes. Your manager has asked you to create new indexes in Cloud Datastore by deploying a YAML configuration file. How should you do it?
A. Upload the YAML configuration file to a Cloud Storage bucket and point the index configuration in App Engine application to this location.
B. Run gcloud datastore indexes create .
C. In GCP Datastore Admin console, delete current configuration YAML file and upload a new configuration YAML file.
D. Send a request to the App Engine’s built-in HTTP modules to update the index configuration file for your application.
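Note: the YAML configuration file referenced in the question typically lists composite indexes, for example (kind and property names are hypothetical):

# index.yaml
indexes:
- kind: Order
  properties:
  - name: customerId
  - name: createdAt
    direction: desc

It can then be deployed with a command like the one in option B, e.g. gcloud datastore indexes create index.yaml (file name hypothetical).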
8. You configured a CI/CD pipeline to deploy changes to your production application, which runs on GCP Compute Engine Managed Instance Group with auto-healing enabled. A recent deployment has caused a loss of functionality in the application. Debugging could take a long time, and downtime is a loss of revenue for your company. What should you do?
A. Deploy the old codebase directly on the VM using custom scripts.
B. Revert changes in the GitHub repository and let the CI/CD pipeline deploy the previous codebase to the production environment.
C. Fix the issue directly on the VM.
D. Modify the Managed Instance Group (MIG) to use the previous instance template, terminate all instances and let autohealing bring back the instances on the previous template.
9. Your company has deployed a wide range of applications across several Google Cloud projects in the organization. You are a security engineer within the Cloud Security team, and an apprentice has recently joined your team. To gain a better understanding of your company’s Google Cloud estate, the apprentice has asked you to provide them with access that gives them detailed visibility of all projects in the organization. Your manager has approved the request but has asked you to ensure the access does not grant them edit/write access to any resources. Which IAM roles should you assign to the apprentice?
A. Organization Owner and Project Owner roles.
B. Organization Viewer and Project Owner roles.
C. Organization Viewer and Project Viewer roles.
D. Organization Owner and Project Viewer roles.
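Note: option C can be granted at the organization level so it covers every project by inheritance, for example (the organization ID and user are hypothetical):

gcloud organizations add-iam-policy-binding 123456789012 \
  --member="user:apprentice@example.com" \
  --role="roles/resourcemanager.organizationViewer"
gcloud organizations add-iam-policy-binding 123456789012 \
  --member="user:apprentice@example.com" \
  --role="roles/viewer"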