This topic explains how to set up the Snappy libraries so that the DP CLI can process Hive tables with Snappy compression.
By default, the DP CLI cannot process Hive tables that use Snappy compression because the required Hadoop native libraries are not on the JVM's library path. To fix this, you must add the Hadoop native libraries' path to the Workflow Manager's sparkContext.properties file, which is located in the $BDD_HOME/workflowmanager/dp/config directory. For information on this configuration file, see Spark configuration.
To configure workflows to use the Snappy libraries:
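As a sketch of what the addition might look like, the entries below use the standard Spark settings spark.driver.extraLibraryPath and spark.executor.extraLibraryPath; the directory shown is a typical CDH parcel location for the Hadoop native libraries and is an assumption, so substitute the actual native-library path for your cluster:

```
# In $BDD_HOME/workflowmanager/dp/config/sparkContext.properties:
# point the Spark driver and executors at the Hadoop native libraries
# (which include the Snappy codec). The path below is a typical CDH
# location; adjust it to match your Hadoop installation.
spark.driver.extraLibraryPath=/opt/cloudera/parcels/CDH/lib/hadoop/lib/native
spark.executor.extraLibraryPath=/opt/cloudera/parcels/CDH/lib/hadoop/lib/native
```

If a library-path property is already present in the file, append the native-library directory to its existing value rather than overwriting it.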
Once the paths are added to the Workflow Manager's sparkContext.properties file, all subsequent DP workflows should be able to process Hive tables with Snappy compression.