This document collects important questions and answers for the Google Cloud Professional Data Engineer (PDE) certification exam.

PROFESSIONAL DATA ENGINEER IMPORTANT QUESTIONS

1. Your company built a TensorFlow neural-network model with a large number of neurons
and layers. The model fits the training data well. However, when tested against new data,
it performs poorly. What method can you employ to address this?

• A. Threading
• B. Serialization
• C. Dropout Methods
• D. Dimensionality Reduction
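
Dropout (option C) regularizes an overfit network by randomly zeroing activations during training. A minimal Keras sketch; the layer sizes and the 0.5 rate are illustrative assumptions, not part of the question:

```python
# Minimal dropout sketch in TensorFlow/Keras; layer sizes and the 0.5 rate
# are illustrative assumptions.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu", input_shape=(100,)),
    tf.keras.layers.Dropout(0.5),  # zeroes 50% of activations, training only
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
```
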
2. You are building a model to make clothing recommendations. You know a user's fashion
preference is likely to change over time, so you build a data pipeline to stream new data back
to the model as it becomes available. How should you use this data to train the model?

• A. Continuously retrain the model on just the new data.
• B. Continuously retrain the model on a combination of existing data and the new
data.
• C. Train on the existing data while using the new data as your test set.
• D. Train on the new data while using the existing data as your test set.
3. You designed a database for patient records as a pilot project to cover a few hundred
patients in three clinics. Your design used a single database table to represent all patients and
their visits, and you used self-joins to generate reports. The server resource utilization was at
50%. Since then, the scope of the project has expanded. The database must now store 100
times more patient records. You can no longer run the reports, because they either take too
long or they encounter errors with insufficient compute resources. How should you adjust
the database design?

• A. Add capacity (memory and disk space) to the database server by the order of 200.
• B. Shard the tables into smaller ones based on date ranges, and only generate reports
with prespecified date ranges.
• C. Normalize the master patient-record table into the patient table and the visits
table, and create other necessary tables to avoid self-join.
• D. Partition the table into smaller tables, with one for each clinic. Run queries against
the smaller table pairs, and use unions for consolidated reports.
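
Option C's normalization can be sketched with a relational stand-in. The table and column names below are illustrative assumptions; SQLite is used only so the sketch is self-contained:

```python
# Illustrative sketch of option C: split the single patient-record table into
# a patients table and a visits table so reports no longer need self-joins.
# Table and column names are assumptions for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patients (
    patient_id INTEGER PRIMARY KEY,
    name       TEXT,
    clinic     TEXT
);
CREATE TABLE visits (
    visit_id   INTEGER PRIMARY KEY,
    patient_id INTEGER REFERENCES patients(patient_id),
    visit_date TEXT,
    notes      TEXT
);
""")
# A report becomes a plain join instead of a self-join:
conn.execute("""
SELECT p.name, COUNT(v.visit_id)
FROM patients p JOIN visits v ON v.patient_id = p.patient_id
GROUP BY p.name
""")
```
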
4. You create an important report for your large team in Google Data Studio 360. The report
uses Google BigQuery as its data source. You notice that visualizations are not showing data
that is less than 1 hour old. What should you do?

• A. Disable caching by editing the report settings.
• B. Disable caching in BigQuery by editing table details.
• C. Refresh your browser tab showing the visualizations.
• D. Clear your browser history for the past hour then reload the tab showing the
visualizations.
5. An external customer provides you with a daily dump of data from their database. The
data flows into Google Cloud Storage (GCS) as comma-separated values (CSV) files. You
want to analyze this data in Google BigQuery, but the data could have rows that are
formatted incorrectly or corrupted. How should you build this pipeline?

• A. Use federated data sources, and check data in the SQL query.
• B. Enable BigQuery monitoring in Google Stackdriver and create an alert.
• C. Import the data into BigQuery using the gcloud CLI and set max_bad_records to 0.
• D. Run a Google Cloud Dataflow batch pipeline to import the data into
BigQuery, and push errors to another dead-letter table for analysis.
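
Option D's pattern, routing well-formed rows to the main table and malformed ones to a dead-letter table, can be sketched with the Apache Beam Python SDK. Bucket, table, column, and schema names are illustrative assumptions:

```python
# Sketch of a Beam/Dataflow batch pipeline with a dead-letter output.
# All resource names and the row schema are illustrative assumptions.
import csv
import apache_beam as beam

class ParseCsv(beam.DoFn):
    def process(self, line):
        try:
            fields = next(csv.reader([line]))
            yield {"id": int(fields[0]), "value": fields[1]}
        except (ValueError, IndexError, StopIteration):
            # Tag bad rows instead of failing the pipeline.
            yield beam.pvalue.TaggedOutput("dead_letter", {"raw": line})

with beam.Pipeline() as p:
    parsed, dead = (
        p
        | beam.io.ReadFromText("gs://example-bucket/daily_dump/*.csv")
        | beam.ParDo(ParseCsv()).with_outputs("dead_letter", main="parsed")
    )
    parsed | "WriteGood" >> beam.io.WriteToBigQuery(
        "example-project:dataset.events", schema="id:INTEGER,value:STRING")
    dead | "WriteBad" >> beam.io.WriteToBigQuery(
        "example-project:dataset.events_dead_letter", schema="raw:STRING")
```
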
6. Your weather app queries a database every 15 minutes to get the current temperature. The
frontend is powered by Google App Engine and serves millions of users. How should you
design the frontend to respond to a database failure?

• A. Issue a command to restart the database servers.
• B. Retry the query with exponential backoff, up to a cap of 15 minutes.
• C. Retry the query every second until it comes back online to minimize staleness of
data.
• D. Reduce the query frequency to once every hour until the database comes back
online.
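
Exponential backoff (option B) is the standard client-side pattern here. A minimal sketch; query_database is a hypothetical placeholder for the real database call:

```python
# Sketch of option B: retry with exponential backoff, capped at 15 minutes.
# query_database is a hypothetical placeholder for the real database call.
import random
import time

def query_with_backoff(query_database, cap_seconds=15 * 60):
    delay = 1.0
    while True:
        try:
            return query_database()
        except ConnectionError:
            # Jitter keeps millions of clients from retrying in lockstep.
            time.sleep(min(delay, cap_seconds) + random.uniform(0, 1))
            delay *= 2
```
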
7. You are creating a model to predict housing prices. Due to budget constraints, you must
run it on a single resource-constrained virtual machine. Which learning algorithm should you
use?

• A. Linear regression
• B. Logistic classification
• C. Recurrent neural network
• D. Feedforward neural network
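
Of the listed algorithms, linear regression is by far the cheapest to train and serve. A tiny scikit-learn sketch; the features and prices are invented for illustration:

```python
# Sketch of option A: linear regression fits easily on a constrained VM.
# Feature values and prices are invented for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

X = np.array([[1200, 3], [1500, 4], [900, 2]])  # e.g. square feet, bedrooms
y = np.array([250_000, 320_000, 180_000])       # sale prices

model = LinearRegression().fit(X, y)
print(model.predict([[1100, 3]]))
```
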
8. You are building a new real-time data warehouse for your company and will use Google
BigQuery streaming inserts. There is no guarantee that data will only be sent in once, but
you do have a unique ID for each row of data and an event timestamp. You want to ensure
that duplicates are not included while interactively querying data. Which query type should
you use?

• A. Include ORDER BY DESC on timestamp column and LIMIT to 1.
• B. Use GROUP BY on the unique ID column and timestamp column and SUM on the
values.
• C. Use the LAG window function with PARTITION by unique ID along with
WHERE LAG IS NOT NULL.
• D. Use the ROW_NUMBER window function with PARTITION by unique ID
along with WHERE row equals 1.
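
The ROW_NUMBER pattern in option D can be run interactively. The sketch below uses the BigQuery Python client; the table and column names are illustrative assumptions:

```python
# Sketch of option D: keep only the latest row per unique ID.
# Table and column names are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client()
sql = """
SELECT * EXCEPT(row_num)
FROM (
  SELECT
    *,
    ROW_NUMBER() OVER (PARTITION BY unique_id ORDER BY event_ts DESC) AS row_num
  FROM `example-project.dataset.events`
)
WHERE row_num = 1
"""
for row in client.query(sql).result():
    print(row)
```
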
9. Your company is using WILDCARD tables to query data across multiple tables with
similar names. The SQL statement is currently failing with the following error:

Which table name will make the SQL statement work correctly?

• A. 'bigquery-public-data.noaa_gsod.gsod'
• B. bigquery-public-data.noaa_gsod.gsod*
• C. 'bigquery-public-data.noaa_gsod.gsod'*
• D. `bigquery-public-data.noaa_gsod.gsod*`
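
A wildcard table reference must be enclosed in backticks, and _TABLE_SUFFIX can narrow which tables are scanned. A sketch against the public NOAA dataset (the query body itself is illustrative, since the failing statement is not shown above):

```python
# Sketch: backtick-quoted wildcard table with a _TABLE_SUFFIX filter.
from google.cloud import bigquery

client = bigquery.Client()
sql = """
SELECT MAX(max) AS max_temperature
FROM `bigquery-public-data.noaa_gsod.gsod*`
WHERE _TABLE_SUFFIX BETWEEN '1940' AND '1944'
"""
print(list(client.query(sql).result()))
```
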
10. Your company is in a highly regulated industry. One of your requirements is to ensure
individual users have access only to the minimum amount of information required to do their
jobs. You want to enforce this requirement with Google BigQuery. Which three approaches
can you take? (Choose three.)

• A. Disable writes to certain tables.
• B. Restrict access to tables by role.
• C. Ensure that the data is encrypted at all times.
• D. Restrict BigQuery API access to approved users.
• E. Segregate data across multiple tables or databases.
• F. Use Google Stackdriver Audit Logging to determine policy violations.
11. You are designing a basket abandonment system for an ecommerce company. The
system will send a message to a user based on these rules:
✑ No interaction by the user on the site for 1 hour
✑ Has added more than $30 worth of products to the basket
✑ Has not completed a transaction
You use Google Cloud Dataflow to process the data and decide if a message should be sent.
How should you design the pipeline?

• A. Use a fixed-time window with a duration of 60 minutes.
• B. Use a sliding time window with a duration of 60 minutes.
• C. Use a session window with a gap time duration of 60 minutes.
• D. Use a global window with a time based trigger with a delay of 60 minutes.
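
Session windows (option C) group a user's activity until a gap of inactivity closes the window. A runnable Beam sketch with invented sample events; the downstream messaging logic is only hinted at in a comment:

```python
# Sketch of option C: 60-minute-gap session windows in Apache Beam.
# Sample events (user, basket_total, unix_seconds) are invented.
import apache_beam as beam
from apache_beam.transforms import window

events = [("alice", 12.0, 0), ("alice", 25.0, 600), ("bob", 40.0, 100)]

with beam.Pipeline() as p:
    _ = (
        p
        | beam.Create(events)
        | "Timestamp" >> beam.Map(
            lambda e: window.TimestampedValue((e[0], e[1]), e[2]))
        | beam.WindowInto(window.Sessions(60 * 60))  # gap duration in seconds
        | beam.GroupByKey()  # one group per (user, session window)
        | beam.Map(print)    # real pipeline: apply the $30 / no-checkout rules here
    )
```
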
12. Your company handles data processing for a number of different clients. Each client
prefers to use their own suite of analytics tools, with some allowing direct query access via
Google BigQuery. You need to secure the data so that clients cannot see each other's data.
You want to ensure appropriate access to the data.
Which three steps should you take? (Choose three.)

• A. Load data into different partitions.
• B. Load data into a different dataset for each client.

• C. Put each client's BigQuery dataset into a different table.
• D. Restrict a client's dataset to approved users.
• E. Only allow a service account to access the datasets.
• F. Use the appropriate identity and access management (IAM) roles for each
client's users.
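
Options B, D, and F combine naturally: one dataset per client, with access granted only to that client's approved users. A sketch with the BigQuery Python client; project, dataset, and email values are illustrative assumptions:

```python
# Sketch: per-client dataset with an access entry for an approved user.
# Project, dataset, and email values are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client()
dataset = client.get_dataset("example-project.client_a")

entries = list(dataset.access_entries)
entries.append(bigquery.AccessEntry(
    role="READER",
    entity_type="userByEmail",
    entity_id="analyst@client-a.example.com",
))
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])
```
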
13. You want to process payment transactions in a point-of-sale application that will run on
Google Cloud Platform. Your user base could grow exponentially, but you do not want to
manage infrastructure scaling.
Which Google database service should you use?

• A. Cloud SQL
• B. BigQuery
• C. Cloud Bigtable
• D. Cloud Datastore
14. You want to use a database of information about tissue samples to classify future tissue
samples as either normal or mutated. You are evaluating an unsupervised anomaly detection
method for classifying the tissue samples. Which two characteristics support this method?
(Choose two.)

• A. There are very few occurrences of mutations relative to normal samples.
• B. There are roughly equal occurrences of both normal and mutated samples in the
database.
• C. You expect future mutations to have different features from the mutated samples in
the database.
• D. You expect future mutations to have similar features to the mutated samples
in the database.
• E. You already have labels for which samples are mutated and which are normal in the
database.
15. You need to store and analyze social media postings in Google BigQuery at a rate of
10,000 messages per minute in near real-time. You initially designed the application to use
streaming inserts for individual postings. Your application also performs data aggregations
right after the streaming inserts. You discover that the queries after streaming inserts do not
exhibit strong consistency, and reports from the queries might miss in-flight data. How can
you adjust your application design?

• A. Re-write the application to load accumulated data every 2 minutes.
• B. Convert the streaming insert code to batch load for individual messages.
• C. Load the original message to Google Cloud SQL, and export the table every hour to
BigQuery via streaming inserts.
• D. Estimate the average latency for data availability after streaming inserts, and
always run queries after waiting twice as long.
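
Option A trades a little latency for consistency: batch-loaded data is available to queries once the load job commits. A sketch of one periodic load with the BigQuery Python client; file and table names are illustrative assumptions:

```python
# Sketch of option A: load accumulated messages as a batch job.
# File and table names are illustrative assumptions.
from google.cloud import bigquery

client = bigquery.Client()
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.NEWLINE_DELIMITED_JSON)

with open("accumulated_postings.json", "rb") as f:
    job = client.load_table_from_file(
        f, "example-project.social.postings", job_config=job_config)
job.result()  # wait for the load job to commit
```
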
16. Your startup has never implemented a formal security policy. Currently, everyone in the
company has access to the datasets stored in Google BigQuery. Teams have freedom to use
the service as they see fit, and they have not documented their use cases. You have been
asked to secure the data warehouse. You need to discover what everyone is doing. What
should you do first?

• A. Use Google Stackdriver Audit Logs to review data access.
