
Hadoop LinkedIn Assessment Answers
1. Most Apache Hadoop committers' work is done at which commercial company?
- Microsoft
- Amazon
- Cloudera
2. What custom object should you implement to reduce I/O in MapReduce?
- Comparator
- Reducer
- Mapper
- Combiner
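
A minimal sketch of where a Combiner plugs into a job, built from the TokenCounterMapper and IntSumReducer classes that ship with Hadoop; the input and output paths come from hypothetical command-line arguments:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.map.TokenCounterMapper;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.mapreduce.lib.reduce.IntSumReducer;

// Word-count driver assembled from Hadoop's bundled mapper/reducer classes.
public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);
        job.setMapperClass(TokenCounterMapper.class);
        // The combiner pre-aggregates map output on each node before the
        // shuffle, so less data is spilled to disk and sent over the network.
        job.setCombinerClass(IntSumReducer.class);
        job.setReducerClass(IntSumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```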
3. Hadoop systems are ______ RDBMS systems.
- not used with
- substitutes for
- replacements for
- additions to
4. For custom data types in a Mapper, you must write and test which of your own objects?
- DefaultComparator
- Mapper
- MapCombiner
- RawComparator
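
A hedged sketch of the idea behind question 4, assuming a hypothetical PairKey used as a map output key: a RawComparator (here built on WritableComparator) lets the framework sort keys on their serialized bytes without deserializing them.

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;

// Hypothetical composite key: two longs serialized back to back (16 bytes).
public class PairKey implements WritableComparable<PairKey> {
    private long first;
    private long second;

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(first);
        out.writeLong(second);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        first = in.readLong();
        second = in.readLong();
    }

    @Override
    public int compareTo(PairKey other) {
        int cmp = Long.compare(first, other.first);
        return cmp != 0 ? cmp : Long.compare(second, other.second);
    }

    @Override
    public int hashCode() {
        // Needed so HashPartitioner sends equal keys to the same reducer.
        return Long.hashCode(first) * 31 + Long.hashCode(second);
    }

    // Raw comparator: compares the serialized bytes directly, avoiding
    // object creation during the sort phase.
    public static class Comparator extends WritableComparator {
        public Comparator() {
            super(PairKey.class);
        }

        @Override
        public int compare(byte[] b1, int s1, int l1, byte[] b2, int s2, int l2) {
            int cmp = Long.compare(readLong(b1, s1), readLong(b2, s2));
            if (cmp != 0) {
                return cmp;
            }
            return Long.compare(readLong(b1, s1 + 8), readLong(b2, s2 + 8));
        }
    }

    static {
        // Register the raw comparator so jobs using PairKey pick it up.
        WritableComparator.define(PairKey.class, new Comparator());
    }
}
```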
5. Which is not a valid input format for a MapReduce job?
- CompositeInputFormat
- TextInputFormat
- FileReader
- RecordReader
6. What is the output of the Reducer?
- a set of <key, value> pairs
- an update to the input file
- a single, combined list
- a relational table
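
A minimal reducer sketch (a hypothetical MaxTemperatureReducer) illustrating question 6: whatever the reducer produces is emitted as <key, value> pairs through the Context and handed to the output format; the job's input files are never touched.

```java
import java.io.IOException;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

// Receives <station, [temperatures]> groups and emits <station, max> pairs.
public class MaxTemperatureReducer
        extends Reducer<Text, IntWritable, Text, IntWritable> {

    @Override
    protected void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int max = Integer.MIN_VALUE;
        for (IntWritable value : values) {
            max = Math.max(max, value.get());
        }
        // The reducer's output is another <key, value> pair written through
        // the Context, not an update to the input data.
        context.write(key, new IntWritable(max));
    }
}
```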
7. When you implement a custom Writable, you must also define which of these objects?
- a filter policy
- a sort policy
- a combiner policy
- a compression policy
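
A hedged sketch for question 7, assuming a hypothetical PageViewWritable value type: Writable itself only covers serialization, so a type that is also used as a key additionally needs a defined sort order (see the PairKey sketch under question 4).

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import org.apache.hadoop.io.Writable;

// Hypothetical custom value type. write/readFields handle serialization;
// if this type were ever used as a key, its sort order would also have to
// be defined (WritableComparable plus, ideally, a raw comparator).
public class PageViewWritable implements Writable {
    private long timestamp;
    private int durationSeconds;

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeLong(timestamp);
        out.writeInt(durationSeconds);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        timestamp = in.readLong();
        durationSeconds = in.readInt();
    }
}
```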
8. If you see org.apache.hadoop.mapred, which version of MapReduce are you working with?
- 3.x
- 2.x
- 1.x
- 0.x
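
The package name is the tell for question 8: org.apache.hadoop.mapred is the original MapReduce 1.x API, in which Mapper is an interface and output goes through an OutputCollector, as in this sketch; the newer API lives in org.apache.hadoop.mapreduce and passes a Context instead.

```java
import java.io.IOException;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.MapReduceBase;
import org.apache.hadoop.mapred.Mapper;
import org.apache.hadoop.mapred.OutputCollector;
import org.apache.hadoop.mapred.Reporter;

// Old-API mapper: a trivial map that swaps the byte offset and the line.
public class OldApiMapper extends MapReduceBase
        implements Mapper<LongWritable, Text, Text, LongWritable> {

    @Override
    public void map(LongWritable key, Text value,
                    OutputCollector<Text, LongWritable> output, Reporter reporter)
            throws IOException {
        output.collect(value, key);
    }
}
```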
9. Which statement should you add to improve the performance of the following query?
SELECT
  c.id,
  c.name,
  c.email_preferences.categories.surveys
FROM customers c;
- FILTER
- SORT
- SUB-SELECT
- GROUP BY
10. In Hadoop MapReduce job code, what must be static?
- configuration
- Reducer
- Mapper and Reducer
- Mapper
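
For question 10: when the Mapper and Reducer are written as nested classes inside the driver (the usual single-file layout), they must be declared static, because the framework instantiates them reflectively and cannot supply an enclosing instance. A compact sketch:

```java
import java.io.IOException;
import java.util.StringTokenizer;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

    // static: otherwise Hadoop cannot create the mapper without an
    // instance of the enclosing WordCount class.
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // static for the same reason as the mapper.
    public static class SumReducer
            extends Reducer<Text, IntWritable, Text, IntWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            context.write(key, new IntWritable(sum));
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCount.class);
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```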
11. In a MapReduce job, which phase runs after the Map phase completes?
- Map2
- Combiner
- Reducer
- Shuffle and Sort
13. A distributed cache file path can originate from what location?
- hdfs
- http
- hdfs or http
- hdfs or ftp
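
For question 13, a driver-side sketch: in the current API, cache files are registered with Job.addCacheFile, and the classic DistributedCache documentation describes the accepted URLs as hdfs:// or http://. The paths below are hypothetical.

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

// Sketch: registering side files for the distributed cache from both
// kinds of location named in the question (hypothetical paths).
public class CacheSetup {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "cache demo");
        job.addCacheFile(new URI("hdfs://namenode:8020/lookup/countries.txt"));
        job.addCacheFile(new URI("http://config.example.com/stopwords.txt"));
        // Job configuration continues as usual (mapper, reducer, paths, submit);
        // tasks can later list the registered files via context.getCacheFiles().
    }
}
```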
14. You can optimize Hive queries using which method?
- a primary key index
- column-based statistics
- secondary indices
- summary statistics
15. HBase works with which type of schema enforcement?
- schema on read
- external schema
- no schema
- schema on write
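
For question 15: HBase only asks you to declare column families up front; individual column qualifiers appear implicitly on write and are interpreted by whoever reads them, which is the schema-on-read model. A hedged client sketch with a hypothetical customers table, assuming an existing info column family:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class SchemaOnReadDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = HBaseConfiguration.create();
        try (Connection connection = ConnectionFactory.createConnection(conf);
             Table table = connection.getTable(TableName.valueOf("customers"))) {
            // No column was ever declared for "email" or "loyalty_tier";
            // HBase stores whatever qualifiers each row brings, and it is
            // up to readers to interpret the bytes.
            Put put = new Put(Bytes.toBytes("customer-42"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("email"),
                    Bytes.toBytes("jane@example.com"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("loyalty_tier"),
                    Bytes.toBytes("gold"));
            table.put(put);
        }
    }
}
```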