7 Using Multimedia Analytics

You can use the multimedia analytics framework in a Big Data environment to perform facial recognition in videos and images.

7.1 About Multimedia Analytics

The multimedia analytics feature of Oracle Big Data Spatial and Graph provides a framework for distributed processing of video and image data in Apache Hadoop. Features of the framework include:

  • APIs to process and analyze video and image data in Apache Hadoop

  • Scalable, high-speed processing that leverages the parallelism of Apache Hadoop

  • Built-in face recognition using OpenCV

  • Ability to install and implement custom video/image processing (for example, license plate recognition) that uses the framework to run in Apache Hadoop

  • Ability to work with input data in HDFS, Oracle NoSQL Database, and HBase

The video analysis framework is installed on Oracle Big Data Appliance if Oracle Big Data Spatial and Graph is licensed, and you can install it on other Hadoop clusters.

7.2 Face Recognition Using the Multimedia Analytics Framework

The multimedia analytics feature comes with built-in face recognition, which uses OpenCV libraries that are available with the product. This section describes how to use this face recognition functionality.

Face recognition has two steps:

  1. “Training” a model with face images. This step can be run in any Hadoop client or node.

  2. Recognizing faces from input video or images using the training model. This step is a MapReduce job that runs in a Hadoop cluster.

The training process creates a model stored in a file. This file is used as input for face recognition from videos or images.

Topics:

  • Training to Detect Faces

  • Selecting Faces to be Used for Training

  • Detecting Faces in Videos

  • Detecting Faces in Images

  • Working with Oracle NoSQL Database

  • Working with Apache HBase

  • Examples and Training Materials for Detecting Faces

7.2.1 Training to Detect Faces

Training is done using the Java program OrdFaceTrainer, which is part of ordhadoop-multimedia-analytics.jar. Inputs to this program are a set of images and a label mapping file that maps images to labels. The output is a training model that is written to a file. (You must not edit this file.)

To train the multimedia analytics feature to detect (recognize) faces, follow these steps.

  1. Create a parent directory and subdirectories to store images that are to be recognized.

    Each subdirectory should contain one or more images of one person. A person can have images in multiple subdirectories, but a subdirectory can have images of only one person. For example, assume that a parent directory named images exists where one subdirectory (d1) contains images of a person named Andrew, and two subdirectories (d2 and d3) contain images of a person named Betty (such as pictures taken at two different times in two different locations). In this example, the directories and their contents might be as follows:

    • images/d1 contains five images of Andrew.

    • images/d2 contains two images of Betty.

    • images/d3 contains four images of Betty.

  2. Create a mapping file that maps image subdirectories to labels.

    A “label” is a numeric ID value to be associated with a person who has images for recognition. For example, Andrew might be assigned the label value 100, and Betty might be assigned the label value 101. Each record (line) in the mapping file must have the following structure:

    <subdirectory>,<label-id>,<label-text>
    

    For example:

    d1,100,Andrew
    d2,101,Betty
    d3,101,Betty
    
  3. Set the required configuration properties:

    oracle.ord.hadoop.ordfacemodel
    oracle.ord.hadoop.ordfacereader
    oracle.ord.hadoop.ordsimplefacereader.dirmap 
    oracle.ord.hadoop.ordsimplefacereader.imagedir
    

    For information about the available properties, see Configuration Properties for Multimedia Analytics. A sample training configuration file is sketched at the end of this topic.

  4. Set the CLASSPATH. Include the following in the Java CLASSPATH definition. Replace each asterisk (*) with the actual version number.

    $MMA_HOME/lib/ordhadoop-multimedia-analytics.jar
    $MMA_HOME/opencv_3.0.0/opencv-300.jar
    $HADOOP_HOME/hadoop-common-*.jar
    $HADOOP_HOME/hadoop-auth-*.jar
    $HADOOP_HOME/commons-lang*.jar
    $HADOOP_HOME/commons-logging-*.jar
    $HADOOP_HOME/commons-configuration-*.jar
    $HADOOP_HOME/commons-collections-*.jar
    $HADOOP_HOME/guava-*.jar
    $HADOOP_HOME/slf4j-api-*.jar
    $HADOOP_HOME/slf4j-log4j12-*.jar
    $HADOOP_HOME/log4j-*.jar
    $HADOOP_HOME/commons-cli-*.jar
    $HADOOP_HOME/protobuf-java-*.jar
    $HADOOP_HOME/avro-*.jar
    $HADOOP_HOME/hadoop-hdfs-*.jar
    $HADOOP_HOME/hadoop-mapreduce-client-core-*.jar
    
  5. Create the training model. Enter a command in the following general form:

    java -classpath <…> oracle.ord.hadoop.recognizer.OrdFaceTrainer <training_config_file.xml>
    

Note:

$MMA_HOME/example has a set of sample files. It includes scripts for setting the Java CLASSPATH. You can edit the example as needed to create a training model.
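
As a minimal sketch, a training configuration file (the training_config_file.xml passed to OrdFaceTrainer in step 5) might combine the properties from step 3 as follows. The sketch assumes the standard Hadoop <configuration> XML format, and the file and directory names simply reuse the illustrative values shown in Configuration Properties for Multimedia Analytics; adjust them for your environment.

<?xml version="1.0"?>
<configuration>
  <!-- File to which the generated training model is written -->
  <property>
    <name>oracle.ord.hadoop.ordfacemodel</name>
    <value>ordfacemodel_bigdata.dat</value>
  </property>
  <!-- Java class that reads the training images -->
  <property>
    <name>oracle.ord.hadoop.ordfacereader</name>
    <value>oracle.ord.hadoop.OrdSimpleFaceReader</value>
  </property>
  <!-- Mapping file that maps labels to image subdirectories -->
  <property>
    <name>oracle.ord.hadoop.ordsimplefacereader.dirmap</name>
    <value>faces/bigdata/dirmap.txt</value>
  </property>
  <!-- Directory containing the face image subdirectories -->
  <property>
    <name>oracle.ord.hadoop.ordsimplefacereader.imagedir</name>
    <value>faces/bigdata</value>
  </property>
</configuration>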

7.2.2 Selecting Faces to be Used for Training

Images used to create the training model should contain only the face, with as little extra detail around the face as possible. For example, a suitable training set for one person might consist of four closely cropped images of that person's face, each with a different facial expression.

The selection of images for training is important for accurate matching. The following guidelines apply:

  • The set of images should contain faces with all expected poses and facial expressions (for example, closed eyes, smiles, and so on).

  • Try to avoid including images that are very similar.

  • If it is necessary to recognize a person against several backgrounds and lighting conditions, include images with those backgrounds and lighting conditions.

  • The number of images to include depends on the variety of movements and backgrounds expected in the input data.

7.2.3 Detecting Faces in Videos

To detect (recognize) faces in videos, you have the following options for the face recognition software that processes the video frames:

  • Use OrdOpenCVFaceRecognizerMulti as the frame processor, along with any of the frontal face cascade classifiers available with OpenCV.

    The haarcascade_frontalface_alt2.xml classifier is a good place to start. You can experiment with the different cascade classifiers to identify a good fit for your requirements.

  • Use third-party face recognition software.

To perform recognition, follow these steps:

  1. Copy the video files (containing video in which you want to recognize faces) to HDFS.

  2. Copy these required files to a shared location accessible by all nodes in the cluster:

    • Generated training model

    • Mapping file that maps image subdirectories to labels

    • Cascade classifier XML file

  3. Create the configuration file. (A sample configuration file is sketched at the end of this topic.)

    Required configuration parameters:

    • oracle.ord.hadoop.inputtype: Type of input data (video or image).

    • oracle.ord.hadoop.outputtypes: Format of the generated results (JSON, text, or image).

    • oracle.ord.hadoop.ordframegrabber: Java class that gets video frames from the video data. You can use the Java classes available with the product or you can provide an implementation for the abstraction.

      • OrdJCodecFrameGrabber is available with the product. This class can be used without any additional steps. See www.jcodec.org for more details on JCodec.

      • OrdFFMPEGFrameGrabber is available with the product. This class requires installation of FFMPEG libraries. See www.ffmpeg.org for more details.

    • oracle.ord.hadoop.ordframeprocessor: Processor to use on the video frame to recognize faces. You can use the Java classes available with the product or you can provide an implementation for the abstraction.

    • oracle.ord.hadoop.recognizer.classifier: Cascade classifier XML file.

    • oracle.ord.hadoop.recognizer.labelnamefile: Mapping file that maps image subdirectories to labels.

    Optional configuration parameters:

    • oracle.ord.hadoop.frameinterval: Time interval (number of seconds) between frames that are processed. Default: 1.

    • oracle.ord.hadoop.numofsplits: Number of splits of the video file on the Hadoop cluster, with one split analyzed on each node of the Hadoop cluster. Default: 1.

    • oracle.ord.hadoop.recognizer.cascadeclassifier.scalefactor: Scale factor to be used for matching images used in training with faces identified in video frames or images. Default: 1.1 (no scaling)

    • oracle.ord.hadoop.recognizer.cascadeclassifier.minneighbor: Determines the size of the sliding window used to detect a face in a video frame or image. Default: 1.

    • oracle.ord.hadoop.recognizer.cascadeclassifier.flags: Determines type of face detection.

    • oracle.ord.hadoop.recognizer.cascadeclassifier.minsize: Smallest bounding box used to detect a face.

    • oracle.ord.hadoop.recognizer.cascadeclassifier.maxsize: Largest bounding box used to detect a face.

    • oracle.ord.hadoop.recognizer.cascadeclassifier.maxconfidence: Maximum allowable distance between the detected face and a face in the model.

    • oracle.ord.hadoop.ordframeprocessor.k2: Key class for the implemented class for OrdFrameProcessor.

    • oracle.ord.hadoop.ordframeprocessor.v2: Value class for the implemented class for OrdFrameProcessor.

  4. Set the HADOOP_CLASSPATH.

    Ensure that HADOOP_CLASSPATH includes the files listed in Training to Detect Faces.

  5. Run the Hadoop job to recognize faces. Enter a command in the following format:

    $ hadoop jar $MMA_HOME/lib/ordhadoop-multimedia-analytics.jar -conf <conf file> <hdfs_input_directory_containing_video_data> <hdfs_output_directory_to_write_results>
    

The accuracy of detecting faces depends on a variety of factors, including lighting, brightness, orientation of the face, distance of the face from the camera, and clarity of the video or image. You should experiment with the configuration properties to determine the best set of values for your use case. Note that it is always possible to have false positives (identifying objects that are not faces as faces) and false recognitions (wrongly labeling a face).

Note:

$MMA_HOME/example has a set of sample files. It includes scripts for setting the Java CLASSPATH. You can edit the example as needed to submit a job to detect faces.
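
As a sketch, a job configuration file for recognizing faces in video might combine the required properties from step 3 as follows. The property names follow the reference topic Configuration Properties for Multimedia Analytics, and the classifier, mapping file, and model file names are illustrative; the classifier, mapping file, and model file must be in the shared location described in step 2.

<?xml version="1.0"?>
<configuration>
  <!-- Input is video data in HDFS -->
  <property>
    <name>oracle.ord.hadoop.inputtype</name>
    <value>video</value>
  </property>
  <!-- Write results as JSON -->
  <property>
    <name>oracle.ord.hadoop.outputtype</name>
    <value>json</value>
  </property>
  <!-- Frame grabber that decodes the video -->
  <property>
    <name>oracle.ord.hadoop.ordframegrabber</name>
    <value>oracle.ord.hadoop.OrdJCodecFrameGrabber</value>
  </property>
  <!-- Frame processor that performs the face recognition -->
  <property>
    <name>oracle.ord.hadoop.ordframeprocessor</name>
    <value>oracle.ord.hadoop.mapreduce.OrdOpenCVFaceRecognizerMulti</value>
  </property>
  <!-- OpenCV cascade classifier XML file -->
  <property>
    <name>oracle.ord.hadoop.recognizer.classifier</name>
    <value>haarcascade_frontalface_alt2.xml</value>
  </property>
  <!-- Mapping file that maps face labels to directories and images -->
  <property>
    <name>oracle.ord.hadoop.recognizer.labelnamefile</name>
    <value>dirmap.txt</value>
  </property>
  <!-- Model file generated by the training step -->
  <property>
    <name>oracle.ord.hadoop.recognizer.modelfile</name>
    <value>ordfacemodel_bigdata.dat</value>
  </property>
</configuration>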

7.2.4 Detecting Faces in Images

To detect faces in images, copy the images to HDFS. Specify the following property:

<property>
  <name>oracle.ord.hadoop.inputtype</name>
  <value>image</value>
</property>

7.2.5 Working with Oracle NoSQL Database

Oracle NoSQL Database provides performance improvements when working with small objects such as images. Images can be stored in Oracle NoSQL Database and accessed by the multimedia analytics framework. If input data is video, then the video must be decoded into frames and the frames stored in an HBase table. HDFS or HBase can be used to store the output of multimedia processing.

The following properties are required when the input is in Oracle NoSQL Database:

  • oracle.ord.hadoop.datasource – Storage option for input data. Specify kvstore if input data is in Oracle NoSQL Database. Default is HDFS.

  • oracle.ord.kvstore.input.name – Name of the Oracle NoSQL Database store.

  • oracle.ord.kvstore.input.table – Name of the NoSQL Database table.

  • oracle.ord.kvstore.input.hosts – Host name and port of an active node in the Oracle NoSQL Database store.

  • oracle.ord.kvstore.input.primarykey – Primary key for accessing records in a table.

  • oracle.ord.hadoop.datasink – Storage option for the output of multimedia analysis. Default is HDFS. Specify HBase to use an HBase table to store the output.

The Oracle NoSQL Database documentation is available at: https://docs.oracle.com/cd/NOSQL/html/index.html
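
For illustration, a job that reads input images from Oracle NoSQL Database might add properties such as the following to its configuration. The store name, table name, host, and primary key shown here are the same illustrative values used in Configuration Properties for Multimedia Analytics; substitute the values for your store.

<!-- Read the input data from Oracle NoSQL Database -->
<property>
  <name>oracle.ord.hadoop.datasource</name>
  <value>kvstore</value>
</property>
<!-- Name of the Oracle NoSQL Database store -->
<property>
  <name>oracle.ord.kvstore.input.name</name>
  <value>kvstore</value>
</property>
<!-- Table containing the input images -->
<property>
  <name>oracle.ord.kvstore.input.table</name>
  <value>images</value>
</property>
<!-- Host and port of an active node in the store -->
<property>
  <name>oracle.ord.kvstore.input.hosts</name>
  <value>localhost:5000</value>
</property>
<!-- Primary key used to access records in the table -->
<property>
  <name>oracle.ord.kvstore.input.primarykey</name>
  <value>filename</value>
</property>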

7.2.6 Working with Apache HBase

Apache HBase provides performance improvements when working with small objects such as images. Images can be stored in an HBase table and accessed by the multimedia analytics framework. If input data is video, then the video must be decoded into frames and the frames stored in an HBase table.

The following properties are used when the input or output is an HBase table:

  • oracle.ord.hadoop.datasource – Storage option for input data. Specify HBase if input data is in an HBase table. Default is HDFS.

  • oracle.ord.hbase.input.table – Name of the HBase table containing the input data.

  • oracle.ord.hbase.input.columnfamily – Name of the HBase column family containing the input data.

  • oracle.ord.hbase.input.column – Name of the HBase column containing the input data.

  • oracle.ord.hadoop.datasink – Storage option for the output of multimedia analysis. Specify HBase to use an HBase table to store the output. Default is HDFS.

  • oracle.ord.hbase.output.columnfamily – Name of the HBase column family in the output HBase table.
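
For illustration, a job that reads input images from an HBase table and writes its results to HBase might include properties such as the following. The table, column family, and column names are the same illustrative values used in Configuration Properties for Multimedia Analytics.

<!-- Read the input data from HBase -->
<property>
  <name>oracle.ord.hadoop.datasource</name>
  <value>hbase</value>
</property>
<property>
  <name>oracle.ord.hbase.input.table</name>
  <value>images</value>
</property>
<property>
  <name>oracle.ord.hbase.input.columnfamily</name>
  <value>image_data</value>
</property>
<property>
  <name>oracle.ord.hbase.input.column</name>
  <value>binary_data</value>
</property>
<!-- Write the analysis results to HBase -->
<property>
  <name>oracle.ord.hadoop.datasink</name>
  <value>hbase</value>
</property>
<property>
  <name>oracle.ord.hbase.output.table</name>
  <value>results</value>
</property>
<property>
  <name>oracle.ord.hbase.output.columnfamily</name>
  <value>face_data</value>
</property>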

7.2.7 Examples and Training Materials for Detecting Faces

Several examples and training materials are provided to help you get started detecting faces.

$MMA_HOME/example contains these directories:

video/ (contains a sample video file in mp4 and avi formats)
facetrain/
analytics/

facetrain/ contains an example for training, facetrain/config/ contains the sample configuration files, and facetrain/faces/ contains images to create the training model and the mapping file that maps labels to images.

runFaceTrainExample.sh is an example bash script that runs the training step.

You can create the training model as follows:

$ ./runFaceTrainExample.sh

The training model will be written to ordfacemodel_bigdata.dat.

For detecting faces in videos, analytics/ contains an example for running a Hadoop job to detect faces in the input video file. This directory contains conf/ with configuration files for the example.

You can run the job as follows (this includes copying the video file to the HDFS directory vinput):

$ ./runFaceDetectionExample.sh

The output of the job will be in the HDFS directory voutput.

For recognizing faces in videos, analytics/ contains an example for running a Hadoop job to recognize faces in the input video file. This directory contains conf/ with configuration files for the example. You can run the job as follows (includes copying the video file to the HDFS directory vinput):

$ ./runFaceRecognizerExample.sh

After the face recognition job, you can display the output images:

$ ./runPlayImagesExample.sh

7.3 Configuration Properties for Multimedia Analytics

The multimedia analytics framework uses the standard methods for specifying configuration properties in the hadoop command. You can use the -conf option to identify configuration files, and the -D option to specify individual properties. This topic presents reference information about the configuration properties.

Some properties are used for specific tasks. For example, training properties include:

  • oracle.ord.hadoop.ordfacereader

  • oracle.ord.hadoop.ordsimplefacereader.imagedir

  • oracle.ord.hadoop.ordsimplefacereader.dirmap

  • oracle.ord.hadoop.ordfacemodel

  • oracle.ord.hadoop.ordfacereaderconfig

The following are the available configuration properties, listed in alphabetical order. For each property, the name is listed first, followed by a description and an example.

oracle.ord.hadoop.datasink

String. Storage option for the output of multimedia analysis: HBase to use an HBase table to store the output; otherwise, HDFS. Default value: HDFS. Example:

<property>
  <name>oracle.ord.hadoop.datasink</name>
  <value>hbase</value>
</property>
oracle.ord.hadoop.datasource

String. Storage option for input data: HBase if the input data is in an HBase database; kvstore if the input data is in an Oracle NoSQL Database; otherwise, HDFS. Default value: HDFS. Example:

<property>
  <name>oracle.ord.hadoop.datasource</name>
  <value>hbase</value>
</property>
oracle.ord.hadoop.frameinterval

String. Time interval (in seconds) between frames that are extracted for processing. Allowable values: positive integers and floating point numbers. Default value: 1. Example:

<property>
  <name>oracle.ord.hadoop.frameinterval</name>
  <value>1</value>
</property>
oracle.ord.hadoop.inputformat

String. Name of the InputFormat class, which represents the input file type in the framework. Default value: oracle.ord.hadoop.OrdVideoInputFormat. Example:

<property>
  <name>oracle.ord.hadoop.inputformat</name>
  <value>oracle.ord.hadoop.OrdVideoInputFormat</value>
</property>
oracle.ord.hadoop.inputtype

String. Type of input data: video or image. Example:

<property>
  <name>oracle.ord.hadoop.inputtype</name>
  <value>video</value>
</property>
oracle.ord.hadoop.numofsplits

Positive integer. Number of splits of the video file on the Hadoop cluster, with one split analyzed on each node of the Hadoop cluster. Recommended value: the number of nodes or processors in the cluster. Default value: 1. Example:

<property>
   <name>oracle.ord.hadoop.numofsplits</name>
   <value>1</value>
</property>
oracle.ord.hadoop.ordfacemodel

String. Name of the file that stores the model created by the training. Example:

<property>
   <name> oracle.ord.hadoop.ordfacemodel </name>
   <value>ordfacemodel_bigdata.dat</value>
</property>
oracle.ord.hadoop.ordfacereader

String. Name of the Java class that reads images used for training the face recognition model. Example:

<property>
   <name> oracle.ord.hadoop.ordfacereader </name>
   <value> oracle.ord.hadoop.OrdSimpleFaceReader </value>
</property>
oracle.ord.hadoop.ordfacereaderconfig

String. File containing additional configuration properties for the specific application. Example:

<property>
   <name> oracle.ord.hadoop.ordfacereaderconfig </name>
   <value>config/ordsimplefacereader_bigdata.xml</value>
</property>
oracle.ord.hadoop.ordframegrabber

String. Name of the Java class that decodes a video file. This is the implemented class for OrdFrameGrabber, and it is used by the mapper to decode the video file. Available installed implementations with the product: oracle.ord.hadoop.OrdJCodecFrameGrabber (the default) and oracle.ord.hadoop.OrdFFMPEGFrameGrabber (when FFMPEG is installed by the user). You can add custom implementations. Example:

<property>
    <name>oracle.ord.hadoop.ordframegrabber</name>
    <value>oracle.ord.hadoop.OrdJCodecFrameGrabber</value>
</property>
oracle.ord.hadoop.ordframeprocessor

String. Name of the implemented Java class of interface OrdFrameProcessor, which is used by the mapper to process the frame and recognize the object of interest. Default value: oracle.ord.hadoop.mapreduce.OrdOpenCVFaceRecognizerMulti. Example:

<property>
  <name>oracle.ord.hadoop.ordframeprocessor </name>
  <value>oracle.ord.hadoop.mapreduce.OrdOpenCVFaceRecognizerMulti</value>
</property>
oracle.ord.hadoop.ordframeprocessor.k2

String. Java class name, output key class of the implemented class of interface OrdFrameProcessor. Default value: org.apache.hadoop.io.Text. Example:

<property>
  <name>oracle.ord.hadoop.ordframeprocessor.k2</name>
  <value>org.apache.hadoop.io.Text</value>
</property>
oracle.ord.hadoop.ordframeprocessor.v2

String. Java class name, output value class of the implemented class of interface OrdFrameProcessor . Default value: oracle.ord.hadoop.mapreduce.OrdImageWritable. Example:

<property>
  <name>oracle.ord.hadoop.ordframeprocessor.v2 </name>
  <value>oracle.ord.hadoop.mapreduce.OrdImageWritable</value>
</property>
oracle.ord.hadoop.ordoutputprocessor

String. Only relevant for custom (user-specified) plug-ins: name of the implemented Java class of interface OrdOutputProcessor that processes the key-value pairs from the map output in the reduce phase. Example:

<property>
  <name>oracle.ord.hadoop.ordoutputprocessor</name>
  <value>mypackage.MyOutputProcessorClass</value>
</property>
oracle.ord.hadoop.ordsimplefacereader.dirmap

String. Mapping file that maps face labels to directory names and face images. Example:

<property>
   <name> oracle.ord.hadoop.ordsimplefacereader.dirmap </name>
   <value>faces/bigdata/dirmap.txt</value>
</property>
oracle.ord.hadoop.ordsimplefacereader.imagedir

String. File system directory containing faces used to create a model. This is typically in a local file system. Example:

<property>
   <name> oracle.ord.hadoop.ordsimplefacereader.imagedir </name>
   <value>faces/bigdata</value>
</property>
oracle.ord.hadoop.outputformat

String. Name of the OutputFormat class, which represents the output file type in the framework. Default value: org.apache.hadoop.mapreduce.lib.output.TextOutputFormat. Example:

<property>
  <name>oracle.ord.hadoop.outputformat</name>
  <value>org.apache.hadoop.mapreduce.lib.output.TextOutputFormat</value>
</property>
oracle.ord.hadoop.outputtype

String. Format of output that contains face labels of identified faces with the time stamp, location, and confidence of the match: must be json, image, or text. Example:

<property>
  <name>oracle.ord.hadoop.outputtype</name>
  <value>json</value>
</property>
oracle.ord.hadoop.parameterfile

String. File containing additional configuration properties for the specific job. Example:

<property>
  <name>oracle.ord.hadoop.parameterfile </name>
  <value>oracle_multimedia_face_recognition.xml</value>
</property>
oracle.ord.hadoop.recognizer.cascadeclassifier.flags

String. Use this property to select the type of object detection. Must be CASCADE_DO_CANNY_PRUNING, CASCADE_SCALE_IMAGE, CASCADE_FIND_BIGGEST_OBJECT (look only for the largest face), or CASCADE_DO_ROUGH_SEARCH. Default: CASCADE_SCALE_IMAGE | CASCADE_DO_ROUGH_SEARCH. Example:

<property>
  <name> oracle.ord.hadoop.recognizer.cascadeclassifier.flags</name>
  <value>CASCADE_SCALE_IMAGE</value>
</property>
oracle.ord.hadoop.recognizer.cascadeclassifier.maxconfidence

Floating point value. Specifies how large the distance (difference) between a face in the model and a face in the input data can be. Larger values will give more matches but might be less accurate (more false positives). Smaller values will give fewer matches, but will be more accurate. Example:

<property>
  <name> oracle.ord.hadoop.recognizer.cascadeclassifier.maxconfidence</name>
  <value>200.0</value>
</property>
oracle.ord.hadoop.recognizer.cascadeclassifier.maxsize

String, specifically a pair of values. Specifies the maximum size of the bounding box for the object detected. If the object is close by, the bounding box is larger; if the object is far away, like faces on a beach, the bounding box is smaller. Objects with a larger bounding box than the maximum size are ignored. Example:

<property>
  <name> oracle.ord.hadoop.recognizer.cascadeclassifier.maxsize</name>
  <value>(500,500)</value>
</property>
oracle.ord.hadoop.recognizer.cascadeclassifier.minneighbor

Integer. Determines the size of the sliding window used to detect the object in the input data. Higher values will detect fewer objects but with higher quality. Default value: 1. Example:

<property>
  <name> oracle.ord.hadoop.recognizer.cascadeclassifier.minneighbor</name>
  <value>1</value>
</property>
oracle.ord.hadoop.recognizer.cascadeclassifier.minsize

String, specifically a pair of values. Specifies the minimum size of the bounding box for the object detected. If the object is close by, the bounding box is larger; if the object is far away, like faces on a beach, the bounding box is smaller. Objects with a smaller bounding box than the minimum size are ignored. Example:

<property>
  <name> oracle.ord.hadoop.recognizer.cascadeclassifier.minsize</name>
  <value>(100,100)</value>
</property>
oracle.ord.hadoop.recognizer.cascadeclassifier.scalefactor

Floating point number. Scale factor to be used for matching images used in training with faces identified in video frames or images. A value of 1.1 means to perform no scaling before comparing faces in the run-time input with images stored in subdirectories during the training process. Example:

<property>
  <name> oracle.ord.hadoop.recognizer.cascadeclassifier.scalefactor</name>
  <value>1.1</value>
</property>
oracle.ord.hadoop.recognizer.classifier

String. XML file containing the cascade classifier for faces. The feature can be used with any of the pre-trained frontal face classifiers available with OpenCV. Example:

<property>
  <name> oracle.ord.hadoop.recognizer.classifier</name>
  <value>haarcascade_frontalface_alt2.xml</value>
</property>
oracle.ord.hadoop.recognizer.labelnamefile

String. Mapping file that maps face labels to directory names and face images. Example:

<property>
  <name> oracle.ord.hadoop.recognizer.labelnamefile</name>
  <value>dirmap.txt</value>
</property>
oracle.ord.hadoop.recognizer.modelfile

String. File containing the model generated in the training step. The file must be in a shared location, accessible by all cluster nodes. Example:

<property>
  <name> oracle.ord.hadoop.recognizer.modelfile</name>
  <value>myface_model.dat</value>
</property>
oracle.ord.hbase.input.column

String. Name of the HBase column containing the input data. Example:

<property>
  <name>oracle.ord.hbase.input.column</name>
  <value>binary_data</value>
</property>
oracle.ord.hbase.input.columnfamily

String. Name of the HBase column family containing the input data. Example:

<property>
  <name>oracle.ord.hbase.input.columnfamily</name>
  <value>image_data</value>
</property>
oracle.ord.hbase.input.table

String. Name of the HBase table containing the input data. Example:

<property>
  <name>oracle.ord.hbase.input.table</name>
  <value>images</value>
</property>
oracle.ord.hbase.output.columnfamily

String. Name of the HBase column family in the output HBase table. Example:

<property>
  <name>oracle.ord.hbase.output.columnfamily</name>
  <value>face_data</value>
</property>
oracle.ord.hbase.output.table

String. Name of the HBase table for output data. Example:

<property>
  <name>oracle.ord.hbase.output.table</name>
  <value>results</value>
</property>
oracle.ord.kvstore.get.consistency

String. Defines the consistency constraints during read. Read operations can be serviced at a Master or Replica node. The default value of ABSOLUTE ensures the read operation is serviced at the Master node. Example:

<property>
    <name>oracle.ord.kvstore.get.consistency</name>
    <value>absolute</value>
</property>
oracle.ord.kvstore.get.timeout

Number. Upper bound on the time interval for retrieving a chunk of the large object or its associated metadata. A best effort is made not to exceed the specified limit. If zero, the KVStoreConfig.getLOBTimeout(java.util.concurrent.TimeUnit) value is used. Default value is 5. Example:

<property>
    <name>oracle.ord.kvstore.get.timeout</name>
    <value>5</value>
</property>
oracle.ord.kvstore.get.timeunit

String. Unit of the timeout parameter, can be NULL only if timeout is zero. Default value is seconds. Example:

<property>
    <name>oracle.ord.kvstore.get.timeunit</name>
    <value>seconds</value>
</property>
oracle.ord.kvstore.input.hosts

String. Host and port of an active node in Oracle NoSQL Database store. Example:

<property>
    <name>oracle.ord.kvstore.input.hosts</name>
    <value>localhost:5000</value>
</property>
oracle.ord.kvstore.input.lob.prefix and oracle.ord.kvstore.input.lob.suffix

Oracle NoSQL Database uses these to construct the keys used to load and retrieve large objects (LOBs). Default value for oracle.ord.kvstore.input.lob.prefix is lobprefix. Default value for oracle.ord.kvstore.input.lob.suffix is lobsuffix.lob. Example:

<property>
    <name>oracle.ord.kvstore.input.lob.prefix</name>
    <value>lobprefix</value>
</property>
<property>
    <name>oracle.ord.kvstore.input.lob.suffix</name>
    <value>lobsuffix.lob</value>
</property>
oracle.ord.kvstore.input.name

String. Name of Oracle NoSQL Database store. The name provided here must be identical to the name used when the store was installed. Example:

<property>
    <name>oracle.ord.kvstore.input.name</name>
    <value>kvstore</value>
</property>
oracle.ord.kvstore.input.primarykey

String. Primary key of the Oracle NoSQL Database table. Example:

<property>
    <name>oracle.ord.kvstore.input.primarykey</name>
    <value>filename</value>
</property>
oracle.ord.kvstore.input.table

String. Name of the Oracle NoSQL Database table containing the input data. Example:

<property>
    <name>oracle.ord.kvstore.input.table</name>
    <value>images</value>
</property>

7.4 Using the Multimedia Analytics Framework with Third-Party Software

You can implement and install custom modules for multimedia decoding and processing.

You can use a custom video decoder in the framework by implementing the abstract class oracle.ord.hadoop.decoder.OrdFrameGrabber. See the Javadoc for additional details. The product includes two implementations of the video decoder that extend OrdFrameGrabber for JCodec and FFMPEG (requires a separate installation of FFMPEG).

You can use custom multimedia analysis in the framework by implementing two abstract classes.

  • oracle.ord.hadoop.mapreduce.OrdFrameProcessor<K1,V1,K2,V2>. The extended class of OrdFrameProcessor is used in the map phase of the MapReduce job that processes the video frames or images. (K1, V1) are the input key-value pair types, and (K2, V2) are the output key-value pair types. See the Javadoc for additional details. The product includes an implementation using OpenCV.

  • oracle.ord.hadoop.mapreduce.OrdOutputProcessor<K1,V1,K2,V2>. The extended class of OrdOutputProcessor is used in the reduce phase of the MapReduce job that processes the video frames or images. (K1, V1) are the input key-value pair types, and (K2, V2) are the output key-value pair types. See the Javadoc for additional details. Most implementations do not require implementing this class.

An example of framework configuration parameters is available in $MMA_HOME/example/analytics/conf/oracle_multimedia_analysis_framework.xml.
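
As a sketch, a custom frame processor could be plugged into the framework through configuration properties such as the following. The class name mypackage.LicensePlateProcessor is a hypothetical placeholder for your own OrdFrameProcessor implementation; the key and value classes shown are the documented defaults and would match your implementation's output types.

<!-- Hypothetical custom implementation of OrdFrameProcessor -->
<property>
  <name>oracle.ord.hadoop.ordframeprocessor</name>
  <value>mypackage.LicensePlateProcessor</value>
</property>
<!-- Output key class of the custom frame processor -->
<property>
  <name>oracle.ord.hadoop.ordframeprocessor.k2</name>
  <value>org.apache.hadoop.io.Text</value>
</property>
<!-- Output value class of the custom frame processor -->
<property>
  <name>oracle.ord.hadoop.ordframeprocessor.v2</name>
  <value>oracle.ord.hadoop.mapreduce.OrdImageWritable</value>
</property>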

7.5 Displaying Images in Output

If the output type is image, you can use oracle.ord.hadoop.demo.OrdPlayImages to display all the images in the output HDFS directory. This displays the image frames marked with labels for the identified faces. For example:

$ java oracle.ord.hadoop.demo.OrdPlayImages -hadoop_conf_dir $HADOOP_CONF_DIR -image_file_dir voutput