
Databricks-Certified-Associate-Developer-for-Apache-Spark-3.0 Sample Questions and Answers

Question 4

The code block shown below should set the number of partitions that Spark uses when shuffling data for joins or aggregations (the spark.sql.shuffle.partitions property) to 100. Choose the answer that correctly fills the blanks in the code block to accomplish this.

__1__.__2__.__3__(__4__, 100)

Options:

A.

1. spark

2. conf

3. set

4. "spark.sql.shuffle.partitions"

B.

1. pyspark

2. config

3. set

4. spark.shuffle.partitions

C.

1. spark

2. conf

3. get

4. "spark.sql.shuffle.partitions"

D.

1. pyspark

2. config

3. set

4. "spark.sql.shuffle.partitions"

E.

1. spark

2. conf

3. set

4. "spark.sql.aggregate.partitions"

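For reference, a minimal sketch of setting this property through the runtime configuration, assuming a local SparkSession named spark (as in the question):

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("shuffle-partitions-demo").getOrCreate()

# Set the number of partitions used when shuffling data for joins or aggregations.
spark.conf.set("spark.sql.shuffle.partitions", 100)

# Read the value back to confirm it took effect.
print(spark.conf.get("spark.sql.shuffle.partitions"))  # prints 100
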
Question 5

Which of the following describes a narrow transformation?

Options:

A.

A narrow transformation is an operation in which data is exchanged across partitions.

B.

A narrow transformation is a process in which data from multiple RDDs is used.

C.

A narrow transformation is a process in which 32-bit float variables are cast to smaller float variables, like 16-bit or 8-bit float variables.

D.

A narrow transformation is an operation in which data is exchanged across the cluster.

E.

A narrow transformation is an operation in which no data is exchanged across the cluster.

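For reference, a minimal sketch contrasting a narrow transformation (no data exchanged across the cluster) with a wide one, assuming a local SparkSession and a small hypothetical DataFrame:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("narrow-vs-wide-demo").getOrCreate()
df = spark.range(100)  # hypothetical single-column DataFrame ("id")

# Narrow transformation: each output partition depends on exactly one input
# partition, so no data is exchanged across the cluster.
narrow = df.filter(df.id % 2 == 0)

# Wide transformation: grouping requires a shuffle, i.e. data is exchanged
# across partitions.
wide = df.groupBy((df.id % 10).alias("bucket")).count()

narrow.show()
wide.show()
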
Question 6

Which of the following statements about RDDs is incorrect?

Options:

A.

An RDD consists of a single partition.

B.

The high-level DataFrame API is built on top of the low-level RDD API.

C.

RDDs are immutable.

D.

RDD stands for Resilient Distributed Dataset.

E.

RDDs are great for precisely instructing Spark on how to do a query.

Question 7

Which of the following statements about Spark's DataFrames is incorrect?

Options:

A.

Spark's DataFrames are immutable.

B.

Spark's DataFrames are equal to Python's DataFrames.

C.

Data in DataFrames is organized into named columns.

D.

RDDs are at the core of DataFrames.

E.

The data in DataFrames may be split into multiple chunks.

Question 8

Which of the following describes a difference between Spark's cluster and client execution modes?

Options:

A.

In cluster mode, the cluster manager resides on a worker node, while it resides on an edge node in client mode.

B.

In cluster mode, executor processes run on worker nodes, while they run on gateway nodes in client mode.

C.

In cluster mode, the driver resides on a worker node, while it resides on an edge node in client mode.

D.

In cluster mode, a gateway machine hosts the driver, while it is co-located with the executor in client mode.

E.

In cluster mode, the Spark driver is not co-located with the cluster manager, while it is co-located in client mode.

Question 9

Which of the following DataFrame methods is classified as a transformation?

Options:

A.

DataFrame.count()

B.

DataFrame.show()

C.

DataFrame.select()

D.

DataFrame.foreach()

E.

DataFrame.first()

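For reference, a minimal sketch illustrating that select() is a lazy transformation while count(), show(), and first() are actions, assuming a small hypothetical DataFrame:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("transformation-vs-action-demo").getOrCreate()
df = spark.range(10)  # hypothetical DataFrame

selected = df.select("id")   # transformation: returns a new DataFrame, nothing runs yet
print(type(selected))        # <class 'pyspark.sql.dataframe.DataFrame'>

print(df.count())            # action: triggers a job and returns a number
df.show()                    # action: triggers a job and prints rows
print(df.first())            # action: returns the first Row
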
Question 10

Which of the following statements about broadcast variables is correct?

Options:

A.

Broadcast variables are serialized with every single task.

B.

Broadcast variables are commonly used for tables that do not fit into memory.

C.

Broadcast variables are immutable.

D.

Broadcast variables are occasionally dynamically updated on a per-task basis.

E.

Broadcast variables are local to the worker node and not shared across the cluster.

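For reference, a minimal sketch of creating and reading a broadcast variable (which is immutable and shared read-only with every executor), assuming a local SparkSession:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("broadcast-demo").getOrCreate()
sc = spark.sparkContext

# A small lookup table that fits in memory, broadcast once to all workers.
lookup = sc.broadcast({"a": 1, "b": 2})

rdd = sc.parallelize(["a", "b", "a"])
# Tasks read the broadcast value; they cannot modify it.
result = rdd.map(lambda k: lookup.value[k]).collect()
print(result)  # [1, 2, 1]
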
Question 11

The code block displayed below contains an error. The code block should write DataFrame transactionsDf as a parquet file to location filePath after partitioning it on column storeId. Find the error.

Code block:

transactionsDf.write.partitionOn("storeId").parquet(filePath)

Options:

A.

The partitioning column as well as the file path should be passed to the write() method of DataFrame transactionsDf directly and not as appended commands as in the code block.

B.

The partitionOn method should be called before the write method.

C.

The operator should use the mode() option to configure the DataFrameWriter so that it replaces any existing files at location filePath.

D.

Column storeId should be wrapped in a col() operator.

E.

No method partitionOn() exists for the DataFrame class, partitionBy() should be used instead.

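For reference, a minimal sketch of the intended write using DataFrameWriter.partitionBy(), assuming hypothetical sample data for transactionsDf and a temporary directory standing in for filePath:

import tempfile
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partitionBy-demo").getOrCreate()
transactionsDf = spark.createDataFrame(
    [(1, 25, 4), (2, 2, 7)], ["transactionId", "storeId", "value"]
)
filePath = tempfile.mkdtemp()  # hypothetical output location

# partitionBy (not partitionOn) is the DataFrameWriter method that partitions the output.
transactionsDf.write.partitionBy("storeId").parquet(filePath, mode="overwrite")
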
Question 12

Which of the following code blocks performs an inner join between DataFrame itemsDf and DataFrame transactionsDf, using columns itemId and transactionId as join keys, respectively?

Options:

A.

itemsDf.join(transactionsDf, "inner", itemsDf.itemId == transactionsDf.transactionId)

B.

itemsDf.join(transactionsDf, itemId == transactionId)

C.

itemsDf.join(transactionsDf, itemsDf.itemId == transactionsDf.transactionId, "inner")

D.

itemsDf.join(transactionsDf, "itemsDf.itemId == transactionsDf.transactionId", "inner")

E.

itemsDf.join(transactionsDf, col(itemsDf.itemId) == col(transactionsDf.transactionId))

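For reference, a minimal sketch of the join signature DataFrame.join(other, on, how) with a column-expression condition, assuming small hypothetical versions of itemsDf and transactionsDf:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("join-demo").getOrCreate()
itemsDf = spark.createDataFrame([(1, "Thick Coat"), (2, "Summer Dress")],
                                ["itemId", "itemName"])
transactionsDf = spark.createDataFrame([(1, 4), (3, 7)], ["transactionId", "value"])

# join(other, on, how): the join condition comes second, the join type third.
joined = itemsDf.join(
    transactionsDf,
    itemsDf.itemId == transactionsDf.transactionId,
    "inner",
)
joined.show()
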
Question 13

Which of the following statements about lazy evaluation is incorrect?

Options:

A.

Predicate pushdown is a feature resulting from lazy evaluation.

B.

Execution is triggered by transformations.

C.

Spark will fail a job only during execution, but not during definition.

D.

Accumulators do not change the lazy evaluation model of Spark.

E.

Lineages allow Spark to coalesce transformations into stages.

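For reference, a minimal sketch showing that transformations alone do not trigger execution; only an action does, assuming a local SparkSession:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("lazy-eval-demo").getOrCreate()
df = spark.range(1000)

# Transformations only build up the lineage; no job is launched here.
pipeline = df.filter(df.id > 10).select((df.id * 2).alias("doubled"))

# The action triggers execution of the whole lineage.
print(pipeline.count())
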
Question 14

Which of the following code blocks prints out in how many rows the expression Inc. appears in the string-type column supplier of DataFrame itemsDf?

Options:

A.

1.counter = 0

2.

3.for index, row in itemsDf.iterrows():

4. if 'Inc.' in row['supplier']:

5. counter = counter + 1

6.

7.print(counter)

B.

1.counter = 0

2.

3.def count(x):

4. if 'Inc.' in x['supplier']:

5. counter = counter + 1

6.

7.itemsDf.foreach(count)

8.print(counter)

C.

print(itemsDf.foreach(lambda x: 'Inc.' in x))

D.

print(itemsDf.foreach(lambda x: 'Inc.' in x).sum())

E.

1.accum=sc.accumulator(0)

2.

3.def check_if_inc_in_supplier(row):

4. if 'Inc.' in row['supplier']:

5. accum.add(1)

6.

7.itemsDf.foreach(check_if_inc_in_supplier)

8.print(accum.value)

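For reference, a runnable sketch of the accumulator-plus-foreach pattern from option E above, using a small hypothetical itemsDf with invented sample rows:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("accumulator-demo").getOrCreate()
sc = spark.sparkContext

itemsDf = spark.createDataFrame(
    [("Thick Coat for Walking in the Snow", "Sports Company Inc."),
     ("Elegant Outdoors Summer Dress", "YetiX")],
    ["itemName", "supplier"],
)

accum = sc.accumulator(0)

def check_if_inc_in_supplier(row):
    # Runs on the executors; additions are sent back to the driver.
    if "Inc." in row["supplier"]:
        accum.add(1)

itemsDf.foreach(check_if_inc_in_supplier)
print(accum.value)  # number of rows whose supplier contains 'Inc.'
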
Question 15

Which of the following code blocks performs an inner join of DataFrames transactionsDf and itemsDf on columns productId and itemId, respectively, excluding columns value and storeId from DataFrame transactionsDf and column attributes from DataFrame itemsDf?

Options:

A.

transactionsDf.drop('value', 'storeId').join(itemsDf.select('attributes'), transactionsDf.productId==itemsDf.itemId)

B.

1.transactionsDf.createOrReplaceTempView('transactionsDf')

2.itemsDf.createOrReplaceTempView('itemsDf')

3.

4.spark.sql("SELECT -value, -storeId FROM transactionsDf INNER JOIN itemsDf ON productId==itemId").drop("attributes")

C.

transactionsDf.drop("value", "storeId").join(itemsDf.drop("attributes"), "transactionsDf.productId==itemsDf.itemId")

D.

1.transactionsDf \

2. .drop(col('value'), col('storeId')) \

3. .join(itemsDf.drop(col('attributes')), col('productId')==col('itemId'))

E.

1.transactionsDf.createOrReplaceTempView('transactionsDf')

2.itemsDf.createOrReplaceTempView('itemsDf')

3.

4.statement = """

5.SELECT * FROM transactionsDf

6.INNER JOIN itemsDf

7.ON transactionsDf.productId==itemsDf.itemId

8."""

9.spark.sql(statement).drop("value", "storeId", "attributes")

Question 16

Which of the following code blocks displays various aggregated statistics of all columns in DataFrame transactionsDf, including the standard deviation and minimum of values in each column?

Options:

A.

transactionsDf.summary()

B.

transactionsDf.agg("count", "mean", "stddev", "25%", "50%", "75%", "min")

C.

transactionsDf.summary("count", "mean", "stddev", "25%", "50%", "75%", "max").show()

D.

transactionsDf.agg("count", "mean", "stddev", "25%", "50%", "75%", "min").show()

E.

transactionsDf.summary().show()

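For reference, a minimal sketch of DataFrame.summary(), which by default includes count, mean, stddev, min, the quartiles, and max, assuming a small hypothetical transactionsDf:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("summary-demo").getOrCreate()
transactionsDf = spark.createDataFrame([(1, 3, 4), (2, 6, 7), (3, 3, None)],
                                       ["transactionId", "predError", "value"])

# summary() returns a DataFrame of statistics; show() is needed to display it.
transactionsDf.summary().show()
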
Question 17

Which of the following describes the most efficient way to reduce the number of partitions of a DataFrame from 16 to 8?

Options:

A.

Use operation DataFrame.repartition(8) to shuffle the DataFrame and reduce the number of partitions.

B.

Use operation DataFrame.coalesce(8) to fully shuffle the DataFrame and reduce the number of partitions.

C.

Use a narrow transformation to reduce the number of partitions.

D.

Use a wide transformation to reduce the number of partitions.

E.

Use operation DataFrame.coalesce(0.5) to halve the number of partitions in the DataFrame.

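For reference, a minimal sketch of reducing partitions with coalesce(), which merges existing partitions without a full shuffle (unlike repartition()), assuming a local SparkSession:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("coalesce-demo").getOrCreate()

df = spark.range(1000).repartition(16)
print(df.rdd.getNumPartitions())  # 16

# coalesce() reduces the partition count without shuffling all the data.
smaller = df.coalesce(8)
print(smaller.rdd.getNumPartitions())  # 8
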
Question 18

Which of the following code blocks produces the following output, given DataFrame transactionsDf?

Output:

1.root

2. |-- transactionId: integer (nullable = true)

3. |-- predError: integer (nullable = true)

4. |-- value: integer (nullable = true)

5. |-- storeId: integer (nullable = true)

6. |-- productId: integer (nullable = true)

7. |-- f: integer (nullable = true)

DataFrame transactionsDf:

1.+-------------+---------+-----+-------+---------+----+
2.|transactionId|predError|value|storeId|productId|   f|
3.+-------------+---------+-----+-------+---------+----+
4.|            1|        3|    4|     25|        1|null|
5.|            2|        6|    7|      2|        2|null|
6.|            3|        3| null|     25|        3|null|
7.+-------------+---------+-----+-------+---------+----+

Options:

A.

transactionsDf.schema.print()

B.

transactionsDf.rdd.printSchema()

C.

transactionsDf.rdd.formatSchema()

D.

transactionsDf.printSchema()

E.

print(transactionsDf.schema)

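For reference, a minimal sketch of printSchema(), which produces the tree-formatted output shown above, assuming a small hypothetical transactionsDf built from an explicit schema:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("printSchema-demo").getOrCreate()
transactionsDf = spark.createDataFrame(
    [(1, 3, 4, 25, 1, None)],
    "transactionId INT, predError INT, value INT, storeId INT, productId INT, f INT",
)

# Prints the schema in the "root / |-- column: type (nullable = ...)" format.
transactionsDf.printSchema()
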
Question 19

Which of the following statements about Spark's configuration properties is incorrect?

Options:

A.

The maximum number of tasks that an executor can process at the same time is controlled by the spark.task.cpus property.

B.

The maximum number of tasks that an executor can process at the same time is controlled by the spark.executor.cores property.

C.

The default value for spark.sql.autoBroadcastJoinThreshold is 10MB.

D.

The default number of partitions to use when shuffling data for joins or aggregations is 300.

E.

The default number of partitions returned from certain transformations can be controlled by the spark.default.parallelism property.

Question 20

The code block shown below should write DataFrame transactionsDf as a parquet file to path storeDir, using brotli compression and replacing any previously existing file. Choose the answer that correctly fills the blanks in the code block to accomplish this.

transactionsDf.__1__.format("parquet").__2__(__3__).option(__4__, "brotli").__5__(storeDir)

Options:

A.

1. save

2. mode

3. "ignore"

4. "compression"

5. path

B.

1. store

2. with

3. "replacement"

4. "compression"

5. path

C.

1. write

2. mode

3. "overwrite"

4. "compression"

5. save

(Correct)

D.

1. save

2. mode

3. "replace"

4. "compression"

5. path

E.

1. write

2. mode

3. "overwrite"

4. compression

5. parquet

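For reference, a minimal sketch of the write call with mode(), a compression option, and save(), assuming a small hypothetical transactionsDf and a temporary directory standing in for storeDir; snappy is used here instead of brotli only because the brotli codec needs an extra native library that a plain local installation may not have:

import tempfile
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parquet-write-demo").getOrCreate()
transactionsDf = spark.createDataFrame([(1, 4), (2, 7)], ["transactionId", "value"])
storeDir = tempfile.mkdtemp()  # hypothetical output path

# "overwrite" replaces any previously existing files at the target path.
(transactionsDf.write
    .format("parquet")
    .mode("overwrite")
    .option("compression", "snappy")
    .save(storeDir))
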
Question 21

Which of the following code blocks writes DataFrame itemsDf to disk at storage location filePath, making sure to substitute any existing data at that location?

Options:

A.

itemsDf.write.mode("overwrite").parquet(filePath)

B.

itemsDf.write.option("parquet").mode("overwrite").path(filePath)

C.

itemsDf.write(filePath, mode="overwrite")

D.

itemsDf.write.mode("overwrite").path(filePath)

E.

itemsDf.write().parquet(filePath, mode="overwrite")

Question 22

The code block displayed below contains an error. The code block should arrange the rows of DataFrame transactionsDf using information from two columns in an ordered fashion, arranging first by column value, showing smaller numbers at the top and greater numbers at the bottom, and then by column predError, for which all values should be arranged in the inverse way of the order of items in column value. Find the error.

Code block:

transactionsDf.orderBy('value', asc_nulls_first(col('predError')))

Options:

A.

Two orderBy statements with calls to the individual columns should be chained, instead of having both columns in one orderBy statement.

B.

Column value should be wrapped by the col() operator.

C.

Column predError should be sorted in a descending way, putting nulls last.

D.

Column predError should be sorted by desc_nulls_first() instead.

E.

Instead of orderBy, sort should be used.

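For reference, a minimal sketch of ordering ascending by one column and descending by another, assuming a small hypothetical transactionsDf:

from pyspark.sql import SparkSession
from pyspark.sql.functions import asc, desc

spark = SparkSession.builder.appName("orderby-demo").getOrCreate()
transactionsDf = spark.createDataFrame([(4, 3), (7, 6), (4, 1)], ["value", "predError"])

# value ascending (smaller numbers first), predError descending (the inverse order).
transactionsDf.orderBy(asc("value"), desc("predError")).show()
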
Question 23

Which of the following describes characteristics of the Spark UI?

Options:

A.

Via the Spark UI, workloads can be manually distributed across executors.

B.

Via the Spark UI, stage execution speed can be modified.

C.

The Scheduler tab shows how jobs that are run in parallel by multiple users are distributed across the cluster.

D.

There is a place in the Spark UI that shows the property spark.executor.memory.

E.

Some of the tabs in the Spark UI are named Jobs, Stages, Storage, DAGs, Executors, and SQL.

Question 24

The code block shown below should return a copy of DataFrame transactionsDf without columns value and productId and with an additional column associateId that has the value 5. Choose the answer that correctly fills the blanks in the code block to accomplish this.

transactionsDf.__1__(__2__, __3__).__4__(__5__, 'value')

Options:

A.

1. withColumn

2. 'associateId'

3. 5

4. remove

5. 'productId'

B.

1. withNewColumn

2. associateId

3. lit(5)

4. drop

5. productId

C.

1. withColumn

2. 'associateId'

3. lit(5)

4. drop

5. 'productId'

D.

1. withColumnRenamed

2. 'associateId'

3. 5

4. drop

5. 'productId'

E.

1. withColumn

2. col(associateId)

3. lit(5)

4. drop

5. col(productId)

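For reference, a minimal sketch of adding a literal column with withColumn()/lit() and removing columns with drop(), assuming a small hypothetical transactionsDf:

from pyspark.sql import SparkSession
from pyspark.sql.functions import lit

spark = SparkSession.builder.appName("withColumn-drop-demo").getOrCreate()
transactionsDf = spark.createDataFrame([(1, 4, 2), (2, 7, 3)],
                                       ["transactionId", "value", "productId"])

# lit(5) wraps the constant in a Column; drop() takes plain column-name strings.
result = transactionsDf.withColumn("associateId", lit(5)).drop("productId", "value")
result.show()
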
Question 25

Which of the following code blocks returns a new DataFrame in which column attributes of DataFrame itemsDf is renamed to feature0 and column supplier to feature1?

Options:

A.

itemsDf.withColumnRenamed(attributes, feature0).withColumnRenamed(supplier, feature1)

B.

1.itemsDf.withColumnRenamed("attributes", "feature0")

2.itemsDf.withColumnRenamed("supplier", "feature1")

C.

itemsDf.withColumnRenamed(col("attributes"), col("feature0"), col("supplier"), col("feature1"))

D.

itemsDf.withColumnRenamed("attributes", "feature0").withColumnRenamed("supplier", "feature1")

E.

itemsDf.withColumn("attributes", "feature0").withColumn("supplier", "feature1")

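For reference, a minimal sketch of chaining withColumnRenamed() calls, which take plain string column names and each return a new DataFrame, assuming a small hypothetical itemsDf:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rename-demo").getOrCreate()
itemsDf = spark.createDataFrame([(["blue", "winter"], "Sports Company Inc.")],
                                ["attributes", "supplier"])

renamed = (itemsDf
    .withColumnRenamed("attributes", "feature0")
    .withColumnRenamed("supplier", "feature1"))
renamed.printSchema()
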
Question 26

Which of the following code blocks returns an exact copy of DataFrame transactionsDf that does not include rows in which column storeId has the value 25?

Options:

A.

transactionsDf.remove(transactionsDf.storeId==25)

B.

transactionsDf.where(transactionsDf.storeId!=25)

C.

transactionsDf.filter(transactionsDf.storeId==25)

D.

transactionsDf.drop(transactionsDf.storeId==25)

E.

transactionsDf.select(transactionsDf.storeId!=25)

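For reference, a minimal sketch of filtering out rows with where() (an alias of filter()), assuming a small hypothetical transactionsDf:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("where-demo").getOrCreate()
transactionsDf = spark.createDataFrame([(1, 25), (2, 2), (3, 25)],
                                       ["transactionId", "storeId"])

# Keep every row whose storeId is not 25.
transactionsDf.where(transactionsDf.storeId != 25).show()
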
Question 27

The code block displayed below contains an error. The code block should configure Spark so that DataFrames up to a size of 20 MB will be broadcast to all worker nodes when performing a join. Find the error.

Code block:

spark.conf.set("spark.sql.autoBroadcastJoinThreshold", 20)

Options:

A.

Spark will only broadcast DataFrames that are much smaller than the default value.

B.

The correct option to write configurations is through spark.config and not spark.conf.

C.

Spark will only apply the limit to threshold joins and not to other joins.

D.

The passed limit has the wrong variable type.

E.

The command is evaluated lazily and needs to be followed by an action.

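For reference, a minimal sketch of setting the broadcast-join threshold, which is specified in bytes, assuming an active SparkSession:

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("broadcast-threshold-demo").getOrCreate()

# The threshold is given in bytes, so 20 MB must be written out as 20 * 1024 * 1024.
spark.conf.set("spark.sql.autoBroadcastJoinThreshold", 20 * 1024 * 1024)
print(spark.conf.get("spark.sql.autoBroadcastJoinThreshold"))
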
Exam Code: Databricks-Certified-Associate-Developer-for-Apache-Spark-3.0
Exam Name: Databricks Certified Associate Developer for Apache Spark 3.0 Exam
Last Update: Apr 26, 2024
Questions: 180