DP-600 FABRIC CERTIFICATION EXAM QUESTIONS WITH
100% VERIFIED ANSWERS
What is Microsoft Fabric - ANSWER Fabric is a unified software-as-a-service
(SaaS) offering, with all your data stored in a single open format in OneLake.
What is OneLake? - ANSWER It's like OneDrive but for data.
OneLake is Fabric's lake-centric architecture that provides a single, integrated
environment for data professionals and businesses to collaborate on data projects.
Fabric's OneLake architecture facilitates collaboration between data team members
and saves time by eliminating the need to move and copy data between different
systems and teams.
OneLake combines storage locations across different regions and clouds into a
single logical lake, without moving or duplicating data.
What is OneCopy? - ANSWER OneCopy is a key component of OneLake that
allows you to read data from a single copy, without moving or duplicating data.
What is OneLake built on top of? - ANSWER Azure Data Lake Storage (ADLS) Gen2
OneLake Data format? - ANSWER Data can be stored in any format, including
Delta, Parquet, CSV, JSON, and more.
How is tabular data stored in Fabric? - ANSWER Fabric writes tabular data in
delta-parquet format, and all Fabric engines interact with that format seamlessly.
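For example, a minimal PySpark sketch of this, assuming a Fabric notebook
attached to a lakehouse (where a SparkSession is pre-created as spark; the sample
rows and the table name "sales" are hypothetical):

    # Build a small DataFrame (the sample data is hypothetical).
    df = spark.createDataFrame(
        [(1, "2024-01-01", 250.0), (2, "2024-01-02", 125.5)],
        ["order_id", "order_date", "amount"],
    )
    # saveAsTable writes delta-parquet: Parquet data files plus a Delta
    # transaction log, which every Fabric engine can read.
    df.write.format("delta").mode("overwrite").saveAsTable("sales")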
Shortcuts in OneLake - ANSWER are embedded references within OneLake that
point to other files or storage locations.
Shortcuts allow you to quickly source your existing cloud data without having to
copy it, and enables Fabric experiences to derive data from the same source to
always be in sync.
Shortcuts enable you to integrate data into your lakehouse while keeping it stored
in external storage.
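Because a shortcut appears as an ordinary folder, notebook code reads through it
just like native lakehouse data. A hedged sketch, assuming a shortcut named
external_sales already exists under the lakehouse Files area (the name and path
are hypothetical):

    # The bytes stay in the external storage account; only the reference
    # lives in OneLake.
    df = spark.read.parquet("Files/external_sales/2024/")
    df.show(5)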
Microsoft Fabric admin - ANSWER The management of the organization-wide
settings that control how Microsoft Fabric works. Users who are assigned admin
roles configure, monitor, and provision organizational resources.
Benefits of Using Microsoft Fabric - ANSWER Fabric removes data silos and the
need for access to multiple systems, enhancing collaboration between data
professionals.
Data professionals work together in the same SaaS product to better understand
each other's needs and those of the business. Further, data analysts now have
greater context and the ability to transform data further upstream with Data Factory.
Fabric also allows you to quickly and easily provision and run any type of workload
or job without needing pre-approval or planning.
This means that you can scale resources up or down as needed, and be more agile
and responsive to changing business needs.
Fabric is bringing the low-to-no-code concept, functionality, and approach that has
successfully empowered many users on the Power Platform to its own SaaS
offering.
What makes Fabric unique is that it brings these capabilities together in a single,
SaaS integrated experience without the need for access to Azure resources.
Permissions required to enable Fabric - ANSWER Any of the following:
Fabric admin
Power Platform admin
Microsoft 365 admin
Fabric can be enabled at the tenant level or the capacity level, meaning that it can
be enabled for the entire organization or for specific groups of users.
Microsoft Fabric Lakehouse - ANSWER A Lakehouse presents as a database and is
built on top of a data lake using Delta format tables.
Lakehouses combine the SQL-based analytical capabilities of a relational data
warehouse and the flexibility and scalability of a data lake.
Lakehouses store all data formats and can be used with various analytics tools and
programming languages.
As cloud-based solutions, lakehouses can scale automatically and provide high
availability and disaster recovery.
Benefits of a Lakehouse - ANSWER Lakehouses support ACID (Atomicity,
Consistency, Isolation, Durability) transactions through Delta Lake formatted tables
for data consistency and integrity.
They use Spark and SQL engines to process large-scale data and support machine
learning or predictive modeling analytics.
Data is organized in a schema-on-read format, which means you define the schema
as needed rather than having a predefined schema (see the sketch after this list).
Lakehouses are a single location for data engineers, data scientists, and data
analysts to access and use data.
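A minimal sketch of schema-on-read, assuming CSV files landed under
Files/raw/orders/ (the path and columns are hypothetical). The schema is supplied
at read time rather than enforced by the store in advance:

    from pyspark.sql.types import StructType, StructField, StringType, DoubleType

    # Define the schema only when the data is read.
    schema = StructType([
        StructField("order_id", StringType()),
        StructField("amount", DoubleType()),
    ])
    df = spark.read.schema(schema).csv("Files/raw/orders/", header=True)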
What 3 items does each lakehouse produce? - ANSWER Lakehouse
Semantic model
SQL analytics endpoint
SQL analytics endpoint - ANSWER You can analyze data in Delta tables using the
T-SQL language, save functions, generate views, and apply SQL security. To
access the SQL analytics endpoint, you select the corresponding item in the
workspace view or switch to SQL analytics endpoint mode in Lakehouse explorer.
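The endpoint also accepts standard T-SQL connections from outside Fabric. A
hedged sketch using pyodbc from Python, where the server name would come from
the endpoint's SQL connection string in Fabric (the server, database, and table
names shown are hypothetical):

    import pyodbc

    # Sign in interactively with Microsoft Entra ID via the ODBC driver.
    conn = pyodbc.connect(
        "Driver={ODBC Driver 18 for SQL Server};"
        "Server=myendpoint.datawarehouse.fabric.microsoft.com;"
        "Database=MyLakehouse;"
        "Authentication=ActiveDirectoryInteractive;"
    )
    for row in conn.execute("SELECT TOP 5 * FROM dbo.sales;"):
        print(row)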
Ways to load data into a Fabric lakehouse - ANSWER Upload: Upload local files
or folders to the lakehouse. You can then explore and process the file data, and load
the results into tables.
Dataflows (Gen2): Import and transform data from a range of sources using Power
Query Online, and load it directly into a table in the lakehouse.
Notebooks: Use notebooks in Fabric to ingest and transform data, and load it into
tables or files in the lakehouse (see the sketch after this list).
Data Factory pipelines: Copy data and orchestrate data processing activities,
loading the results into tables or files in the lakehouse.
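As a sketch of the notebook option, assuming a CSV file was already uploaded to
the lakehouse Files area (the path, column, and table names are hypothetical):

    # Ingest a raw file, apply a light transformation, and load the
    # result into a managed Delta table in the lakehouse.
    raw = spark.read.csv("Files/landing/customers.csv",
                         header=True, inferSchema=True)
    cleaned = raw.dropDuplicates(["customer_id"])
    cleaned.write.format("delta").mode("append").saveAsTable("customers")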
Where can shortcuts be created? - ANSWER Shortcuts can be created in both
lakehouses and KQL databases, and they appear as folders in the lake. This allows
Spark, SQL, Real-Time Intelligence, and Analysis Services to all utilize shortcuts
when querying data.
Tools and techniques to explore and transform data in Fabric - ANSWER Apache
Spark
SQL Analytics Endpoint
Dataflows
Data Pipelines
What is Apache Spark? - ANSWER Apache Spark is a distributed data processing
framework that enables large-scale data analytics by coordinating work across
multiple processing nodes in a cluster.
Put more simply, Spark uses a "divide and conquer" approach to processing large
volumes of data quickly by distributing the work across multiple computers.
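A small PySpark sketch of that idea (the table name is hypothetical): the groupBy
below runs as per-partition aggregations on the worker nodes, and Spark merges
the partial results into the final answer.

    # Each executor aggregates its own data partitions in parallel;
    # Spark combines the partial sums for you.
    orders = spark.read.table("sales")
    totals = orders.groupBy("order_date").sum("amount")
    totals.show()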
Apache Spark Process - ANSWER The process of distributing tasks and collating
results is handled for you by Spark. You submit a data processing job in the form of
some code that initiates a driver program, which uses a cluster management object
called the SparkContext to manage the distribution of processing in the Spark
cluster.
In most cases, these details are abstracted - you just need to write the code required
to perform the data operations you need.
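In a Fabric notebook the driver's SparkSession (and the SparkContext behind it) is
pre-created as spark, but the setup being abstracted looks roughly like this sketch
(the app name is hypothetical):

    from pyspark.sql import SparkSession

    # The driver program creates (or reuses) a SparkSession ...
    spark = SparkSession.builder.appName("demo").getOrCreate()
    # ... which wraps the SparkContext that distributes work across the cluster.
    sc = spark.sparkContext
    print(sc.defaultParallelism)  # cores available across the executors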