Google Professional Cloud Architect Practice Exam PR000088
Notes: This Google Professional Cloud Architect practice exam will familiarize you with the types of questions you may encounter on the certification exam and help you determine your readiness, or whether you need more preparation and/or experience. Successful completion of the practice exam does not guarantee you will pass the certification exam, as the actual exam is longer and covers a wider range of topics. We highly recommend the Google Professional Cloud Architect Guarantee Part, which includes real questions with highlighted answers collected from the exam; it will help you pass the exam more easily.
For PDF Version: https://gcp-examquestions.com/gcp-pro-cloud-architect-practice/
GCP-CLOUD-PROFESSIONAL-ARC-PART1
- Question 1 of 20
1. Question
For this question, refer to the TerramEarth case study.
https://cloud.google.com/certification/guides/cloud-architect/casestudy-terramearth-rev2
Because you do not know every possible future use for the data TerramEarth collects, you have decided to build a system that captures and stores all raw data in case you need it later. How can you most cost-effectively accomplish this goal?
Hint Answers: D is correct because several load-balanced Compute Engine VMs would suffice to ingest 9 TB per day, and Cloud Storage is the cheapest per-byte storage offered by Google. Depending on the format, the data could be available via BigQuery immediately, or shortly after running through an ETL job. Thus, this solution meets business and technical requirements while optimizing for cost.
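As a rough illustration of the load/ETL step mentioned above, the sketch below (Python, google-cloud-bigquery client) batch-loads raw CSV objects already landed in Cloud Storage into a BigQuery table. The bucket, dataset, and table names are placeholders, not part of the case study.
```python
from google.cloud import bigquery

# Placeholder names for illustration only.
SOURCE_URI = "gs://terramearth-raw-data/telemetry/*.csv"
DESTINATION = "my-project.telemetry.raw_events"

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    autodetect=True,  # infer the schema from the raw files
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

# Load the raw objects from Cloud Storage into BigQuery in a single batch job.
load_job = client.load_table_from_uri(SOURCE_URI, DESTINATION, job_config=job_config)
load_job.result()  # wait for the job to complete
print(f"Loaded {client.get_table(DESTINATION).num_rows} rows.")
```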
https://cloud.google.com/bigquery/quotas#streaming_inserts
https://cloud.google.com/blog/products/data-analytics/10-tips-for-building-long-running-clusters-using-cloud-dataproc
https://cloud.google.com/blog/products/gcp/fastest-track-to-apache-hadoop-and-spark-success-using-job-scoped-clusters-on-cloud-native-architecture
- Question 2 of 20
2. Question
Today, TerramEarth maintenance workers receive interactive performance graphs for the last 24 hours (86,400 events) by plugging their maintenance tablets into the vehicle. The support group wants support technicians to view this data remotely to help troubleshoot problems. You want to minimize the latency of graph loads. How should you provide this functionality?
Hint Answers: B is correct because Cloud Bigtable is optimized for time-series data. It is cost-efficient, highly available, and low-latency. It scales well. Best of all, it is a managed service that does not require significant operations work to keep running.
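To make the time-series point concrete, here is a minimal sketch (Python, google-cloud-bigtable) that writes one telemetry event per row using a `vehicle_id#reversed_timestamp` row key, the pattern the schema-design guide below recommends so a vehicle's most recent events are contiguous for fast scans. The instance, table, and column names are hypothetical.
```python
import datetime

from google.cloud import bigtable

# Hypothetical identifiers for illustration only.
client = bigtable.Client(project="my-project")
table = client.instance("telemetry-instance").table("vehicle_events")

def write_event(vehicle_id: str, event_time: datetime.datetime, payload: bytes) -> None:
    """Store one sensor event keyed so the newest events sort first for a vehicle."""
    # Reverse the timestamp so a prefix scan on the vehicle returns recent data first.
    reversed_ts = (10 ** 10) - int(event_time.timestamp())
    row_key = f"{vehicle_id}#{reversed_ts}".encode()
    row = table.direct_row(row_key)
    row.set_cell("stats", "payload", payload, timestamp=event_time)
    row.commit()

write_event("vehicle-42", datetime.datetime.utcnow(), b'{"engine_temp": 91}')
```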
https://cloud.google.com/bigtable/docs/schema-design-time-series#time-series-cloud-bigtable
https://cloud.google.com/bigquery/external-data-sources
- Question 3 of 20
3. Question
Your agricultural division is experimenting with fully autonomous vehicles. You want your architecture to promote strong security during vehicle operation. Which two architecture characteristics should you consider?
Hint Answers: E is correct because this improves system security by making it more resistant to hacking, especially through man-in-the-middle attacks between modules.
F is correct because this improves system security by making it more resistant to hacking, especially rootkits or other kinds of corruption by malicious actors.
https://en.wikipedia.org/wiki/Trusted_Platform_Module
- Question 4 of 20
4. Question
Which of TerramEarth’s legacy enterprise processes will experience significant change as a result of increased Google Cloud Platform adoption?
Hint Answers: B is correct because all of these tasks are big changes when moving to the cloud. Capacity planning for cloud is different than for on-premises data centers; TCO calculations are adjusted because TerramEarth is using services, not leasing/buying servers; OpEx/CapEx allocation is adjusted as services are consumed vs. using capital expenditures.
https://assets.kpmg/content/dam/kpmg/pdf/2015/11/cloud-economics.pdf
- Question 5 of 20
5. Question
You analyzed TerramEarth’s business requirement to reduce downtime and found that the majority of the time savings can be achieved by reducing customers’ wait time for parts. You have decided to focus on reducing the 3-week aggregate reporting time. Which modifications to the company’s processes should you recommend?
Hint Answers: C is correct because using cellular connectivity will greatly improve the freshness of the data used for analysis, which today is collected only when the machines come in for maintenance. Streaming transport instead of periodic FTP will tighten the feedback loop even more. Machine learning is ideal for predictive maintenance workloads.
- Question 6 of 20
6. Question
Your company wants to deploy several microservices to help their system handle elastic loads. Each microservice uses a different version of software libraries. You want to enable their developers to keep their development environment in sync with the various production services. Which technology should you choose?
Hint Answers: B is correct because using containers for development, test, and production deployments abstracts away system OS environments, so that a single host OS image can be used for all environments. Changes that are made during development are captured using a copy-on-write filesystem, and teams can easily publish new versions of the microservices in a repository.
- Question 7 of 20
7. Question
7. Question
Your company wants to track whether someone is present in a meeting room reserved for a scheduled meeting. There are 1000 meeting rooms across 5 offices on 3 continents. Each room is equipped with a motion sensor that reports its status every second. You want to support the data upload and collection needs of this sensor network. The receiving infrastructure needs to account for the possibility that the devices may have inconsistent connectivity. Which solution should you design?
Hint Answers: C is correct because Cloud Pub/Sub can handle the frequency of this data, and consumers of the data can pull from the shared topic for further processing.
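A minimal sketch of how a sensor gateway might publish its once-per-second status to a shared Pub/Sub topic (Python, google-cloud-pubsub); the project and topic names are placeholders.
```python
import json
import time

from google.cloud import pubsub_v1

# Placeholder project and topic for illustration only.
publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "room-occupancy")

def report_status(room_id: str, occupied: bool) -> None:
    """Publish one sensor reading; the client library retries if connectivity drops."""
    payload = json.dumps({"room": room_id, "occupied": occupied, "ts": time.time()})
    future = publisher.publish(topic_path, payload.encode("utf-8"), room=room_id)
    future.result()  # block until Pub/Sub acknowledges the message

report_status("nyc-4-conference-a", True)
```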
https://cloud.google.com/sql/
https://cloud.google.com/pubsub/
- Question 8 of 20
8. Question
Your company wants to try out the cloud with low risk. They want to archive approximately 100 TB of their log data to the cloud and test the analytics features available to them there, while also retaining that data as a long-term disaster recovery backup. Which two steps should they take?
Hint Answers: A is correct because BigQuery is the fully managed cloud data warehouse for analytics and supports the analytics requirement.
E is correct because Cloud Storage provides the Coldline storage class to support long-term storage with infrequent access, which would support the long-term disaster recovery backup requirement.
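As a sketch of the archival half of the answer, the snippet below (Python, google-cloud-storage) creates a Coldline bucket and uploads a log archive to it; the bucket name and file path are hypothetical.
```python
from google.cloud import storage

client = storage.Client()

# Hypothetical bucket name; Coldline targets infrequently accessed, long-lived data.
bucket = storage.Bucket(client, "example-log-archive")
bucket.storage_class = "COLDLINE"
bucket = client.create_bucket(bucket, location="US")

# Upload one archived log file into the Coldline bucket.
blob = bucket.blob("2020/08/app-logs.tar.gz")
blob.upload_from_filename("/var/backups/app-logs.tar.gz")
print(f"Stored gs://{bucket.name}/{blob.name} in {bucket.storage_class}")
```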
https://cloud.google.com/bigquery/
https://cloud.google.com/stackdriver/
https://cloud.google.com/storage/docs/storage-classes#coldline
https://cloud.google.com/sql/
https://cloud.google.com/bigtable/
- Question 9 of 20
9. Question
You set up an autoscaling instance group to serve web traffic for an upcoming launch. After configuring the instance group as a backend service to an HTTP(S) load balancer, you notice that virtual machine (VM) instances are being terminated and re-launched every minute. The instances do not have a public IP address. You have verified that the appropriate web response is coming from each instance using the curl command. You want to ensure that the backend is configured correctly. What should you do?
Hint Answers: C is correct because health check failures lead to a VM being marked unhealthy and can result in termination if the health check continues to fail. Because you have already verified that the instances are functioning properly, the next step would be to determine why the health check is continuously failing.
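One simple way to double-check the backend's behavior is to probe the health check's configured path and port from inside the VM, as in this standard-library sketch; the URL is a placeholder for whatever your health check actually requests. If the app answers 200 here but the check still fails, look next at network rules that may be blocking the probes from reaching the instances.
```python
import urllib.request

# Placeholder: use the port and request path configured on your health check.
HEALTH_CHECK_URL = "http://localhost:80/healthz"

def probe() -> bool:
    """Return True if the backend answers the health-check request with HTTP 200."""
    try:
        with urllib.request.urlopen(HEALTH_CHECK_URL, timeout=5) as resp:
            return resp.status == 200
    except OSError as exc:
        print(f"Health-check probe failed: {exc}")
        return False

print("healthy" if probe() else "unhealthy")
```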
https://cloud.google.com/load-balancing/docs/health-check-concepts
https://cloud.google.com/load-balancing/docs/https/
- Question 10 of 20
10. Question
Your organization has a 3-tier web application deployed in the same network on Google Cloud Platform. Each tier (web, API, and database) scales independently of the others. Network traffic should flow through the web to the API tier, and then on to the database tier. Traffic should not flow between the web and the database tier. How should you configure the network?
Hint Answers: D is correct because as instances scale, they will all have the same tag to identify the tier. These tags can then be leveraged in firewall rules to allow and restrict traffic as required, because tags can be used for both the target and source.
https://cloud.google.com/vpc/docs/using-vpc
https://cloud.google.com/vpc/docs/routes
https://cloud.google.com/vpc/docs/add-remove-network-tags
- Question 11 of 20
11. Question
Your organization has 5 TB of private data on premises. You need to migrate the data to Cloud Storage. You want to maximize the data transfer speed. How should you migrate the data?
Hint Answers: A is correct because gsutil gives you access to write data to Cloud Storage.
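gsutil's -m flag runs the copy with parallel workers, which is what makes it fast for a 5 TB transfer. The sketch below shows the same idea with the Python client library, uploading files concurrently; the bucket name and source directory are placeholders, and the transfer remains bound by available network bandwidth.
```python
import pathlib
from concurrent.futures import ThreadPoolExecutor

from google.cloud import storage

# Placeholders for illustration only.
BUCKET_NAME = "example-migration-target"
SOURCE_DIR = pathlib.Path("/data/private")

client = storage.Client()
bucket = client.bucket(BUCKET_NAME)

def upload(path: pathlib.Path) -> str:
    """Upload a single file, preserving its path relative to the source directory."""
    blob = bucket.blob(str(path.relative_to(SOURCE_DIR)))
    blob.upload_from_filename(str(path))
    return blob.name

files = [p for p in SOURCE_DIR.rglob("*") if p.is_file()]
# Upload several files in parallel, analogous to `gsutil -m cp -r`.
with ThreadPoolExecutor(max_workers=8) as pool:
    for name in pool.map(upload, files):
        print("uploaded", name)
```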
https://cloud.google.com/storage/docs/gsutil
https://cloud.google.com/sdk/gcloud/
https://cloud.google.com/storage/docs/json_api/v1/how-tos/upload
https://cloud.google.com/storage/docs/uploading-objects
https://cloud.google.com/storage-transfer/docs/overview
- Question 12 of 20
12. Question
You are designing a mobile chat application. You want to ensure that people cannot spoof chat messages by proving that a message was sent by a specific user. What should you do?
Hint Answers: D is correct because PKI requires that both the server and the client have signed certificates, validating both the client and the server.
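The mechanism behind "proving a message was sent by a specific user" is a digital signature: the sender signs the message with their private key, and anyone holding the matching certificate/public key can verify it. A minimal sketch with the third-party `cryptography` package (not a Google API) follows; a real PKI deployment would use keys bound to signed certificates rather than a throwaway key pair.
```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa

# Throwaway key pair purely for illustration; in PKI the key is issued with a certificate.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"hello from alice"

# Sender signs the chat message with their private key.
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

# Receiver verifies with the sender's public key; raises InvalidSignature if tampered with.
public_key.verify(
    signature,
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()), salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)
print("signature verified: the message came from the holder of the private key")
```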
- Question 13 of 20
13. Question
You are designing a large distributed application with 30 microservices. Each of your distributed microservices needs to connect to a database backend. You want to store the credentials securely. Where should you store the credentials?
Hint Answers: C is correct because key management systems generate, use, rotate, encrypt, and destroy cryptographic keys and manage permissions to those keys.
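A hedged sketch of how a microservice might encrypt a database credential with a Cloud KMS key and store only the ciphertext (Python, google-cloud-kms); the project, key ring, and key names are placeholders.
```python
from google.cloud import kms

client = kms.KeyManagementServiceClient()

# Placeholder resource name for illustration only.
key_name = client.crypto_key_path(
    "my-project", "global", "app-keyring", "db-credentials-key"
)

plaintext = b"db-user:SuperSecretPassword"

# Encrypt the credential; only principals granted access to this key can decrypt it.
encrypt_response = client.encrypt(request={"name": key_name, "plaintext": plaintext})
ciphertext = encrypt_response.ciphertext  # store this, never the plaintext

# Later, a service with decrypt access on the key recovers the credential.
decrypt_response = client.decrypt(request={"name": key_name, "ciphertext": ciphertext})
assert decrypt_response.plaintext == plaintext
```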
https://cloud.google.com/kms/
- Question 14 of 20
14. Question
For this question, refer to the Mountkirk Games case study.
https://cloud.google.com/certification/guides/cloud-architect/casestudy-mountkirkgames-rev2
Mountkirk Games wants to set up a real-time analytics platform for their new game. The new platform must meet their technical requirements. Which combination of Google technologies will meet all of their requirements?
Hint Answers: B is correct because:
-Cloud Dataflow dynamically scales up or down, can process data in real time, and is ideal for processing data that arrives late using Beam windows and triggers.
-Cloud Storage can be the landing space for files that are regularly uploaded by users’ mobile devices.
-Cloud Pub/Sub can ingest the streaming data from the mobile users.
-BigQuery can query more than 10 TB of historical data.
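To illustrate the late-data point, here is a minimal Apache Beam (Python SDK) windowing sketch: score events are grouped into one-minute event-time windows, and a trigger re-fires a pane when an element arrives after the watermark, up to an allowed lateness. Pipeline I/O is omitted and the element format (player_id, score) is hypothetical.
```python
import apache_beam as beam
from apache_beam import window
from apache_beam.transforms.trigger import AccumulationMode, AfterCount, AfterWatermark

def apply_game_windowing(events):
    """Window (player_id, score) events into 1-minute windows, re-firing for late data."""
    return (
        events
        | "WindowEvents" >> beam.WindowInto(
            window.FixedWindows(60),                      # 1-minute event-time windows
            trigger=AfterWatermark(late=AfterCount(1)),   # re-fire once per late element
            allowed_lateness=600,                         # accept data up to 10 min late
            accumulation_mode=AccumulationMode.ACCUMULATING,
        )
        | "SumPerPlayer" >> beam.CombinePerKey(sum)
    )
```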
https://cloud.google.com/sql/docs/quotas#fixed-limits
https://beam.apache.org/documentation/programming-guide/#triggers
https://cloud.google.com/solutions/using-apache-hive-on-cloud-dataproc
https://beam.apache.org/documentation/programming-guide/#windowing
https://cloud.google.com/bigquery/external-data-sources
- Question 15 of 20
15. Question
Mountkirk Games has deployed their new backend on Google Cloud Platform (GCP). You want to create a thorough testing process for new versions of the backend before they are released to the public. You want the testing environment to scale in an economical way. How should you design the process?
Hint Answers: A is correct because simulating production load in GCP can scale in an economical way.
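The references below show load testing with Locust, including distributing it on Kubernetes Engine. As a minimal hedged sketch, a Locust user class like the one below defines the simulated traffic; the worker fleet running it can then be scaled up for a test and torn down afterwards. The endpoint paths are hypothetical.
```python
from locust import HttpUser, between, task

class GameClient(HttpUser):
    """Simulated player issuing the same requests real clients would."""
    wait_time = between(1, 3)  # seconds of think time between requests

    @task(5)
    def get_leaderboard(self):
        # Hypothetical endpoint; replace with the backend's real routes.
        self.client.get("/v1/leaderboard")

    @task(1)
    def post_score(self):
        self.client.post("/v1/scores", json={"player": "test", "score": 100})
```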
https://cloud.google.com/community/tutorials/load-testing-iot-using-gcp-and-locust
https://github.com/GoogleCloudPlatform/distributed-load-testing-using-kubernetes
- Question 16 of 20
16. Question
N/A
Hint Answers: C is correct because:
-Google Kubernetes Engine is ideal for deploying small services that can be updated and rolled back quickly. It is a best practice to manage services using immutable containers.
-Cloud Load Balancing supports globally distributed services across multiple regions. It provides a single global IP address that can be used in DNS records. Using URL Maps, the requests can be routed to only the services that Mountkirk wants to expose.
-Container Registry is a single place for a team to manage Docker images for the services.
- Question 17 of 20
17. Question
Your customer is moving their corporate applications to Google Cloud Platform. The security team wants detailed visibility of all resources in the organization. You use Resource Manager to set yourself up as the org admin. What Cloud Identity and Access Management (Cloud IAM) roles should you give to the security team?
Hint Answers: B is correct because:
-Org viewer grants the security team permissions to view the organization’s display name.
-Project viewer grants the security team permissions to see the resources within projects.
https://cloud.google.com/resource-manager/docs/access-control-org#using_predefined_roles
- Question 18 of 20
18. Question
To reduce costs, the Director of Engineering has required all developers to move their development infrastructure resources from on-premises virtual machines (VMs) to Google Cloud Platform. These resources go through multiple start/stop events during the day and require state to persist. You have been asked to design the process of running a development environment in Google Cloud while providing cost visibility to the finance department. Which two steps should you take?
Hint Answers: A is correct because persistent disks will not be deleted when an instance is stopped.
D is correct because exporting daily usage and cost estimates automatically throughout the day to a BigQuery dataset is a good way of providing visibility to the finance department. Labels can then be used to group the costs based on team or cost center.
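Once the billing export lands in BigQuery, grouping cost by a label is a single query. A hedged sketch (Python, google-cloud-bigquery) is below, assuming the standard billing export schema; the export table name and the `cost_center` label key are placeholders, and the real table name comes from your billing export configuration.
```python
from google.cloud import bigquery

client = bigquery.Client()

# Placeholder export table name for illustration only.
QUERY = """
SELECT l.value AS cost_center, ROUND(SUM(cost), 2) AS total_cost
FROM `my-project.billing.gcp_billing_export_v1_XXXXXX`, UNNEST(labels) AS l
WHERE l.key = 'cost_center'
GROUP BY cost_center
ORDER BY total_cost DESC
"""

for row in client.query(QUERY).result():
    print(row.cost_center, row.total_cost)
```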
- Question 19 of 20
19. Question
Your company has decided to make a major revision of their API in order to create better experiences for their developers. They need to keep the old version of the API available and deployable, while allowing new customers and testers to try out the new API. They want to keep the same SSL and DNS records in place to serve both APIs. What should they do?
Hint Answers: D is correct because an HTTP(S) load balancer can direct traffic reaching a single IP to different backends based on the incoming URL.
https://cloud.google.com/load-balancing/docs/https/url-map
https://cloud.google.com/load-balancing/docs/backend-service
https://cloud.google.com/load-balancing/docs/https/global-forwarding-rules
- Question 20 of 20
20. Question
The database administration team has asked you to help them improve the performance of their new database server running on Compute Engine. The database is used for importing and normalizing the company’s performance statistics. It is built with MySQL running on Debian Linux. They have an n1-standard-8 virtual machine with 80 GB of SSD zonal persistent disk. What should they change to get better performance from this system in a cost-effective manner?
Hint Answers: C is correct because persistent disk performance is based on the total persistent disk capacity attached to an instance and the number of vCPUs that the instance has. Incrementing the persistent disk capacity will increment its throughput and IOPS, which in turn improve the performance of MySQL.
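A rough worked example of how performance scales with capacity, assuming the per-GB rates for zonal SSD persistent disks published on the performance page linked below (verify the current figures there, and note per-instance and per-vCPU caps still apply):
```python
# Assumed baseline rates for zonal SSD persistent disks; confirm against the
# performance documentation linked below before relying on these numbers.
IOPS_PER_GB = 30
THROUGHPUT_MBPS_PER_GB = 0.48

for size_gb in (80, 500, 1000):
    print(
        f"{size_gb:>5} GB SSD PD -> ~{size_gb * IOPS_PER_GB:,.0f} IOPS, "
        f"~{size_gb * THROUGHPUT_MBPS_PER_GB:,.0f} MB/s (before instance caps)"
    )
```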
https://cloud.google.com/compute/docs/disks/#pdspecs
https://cloud.google.com/compute/docs/disks/performance