Creating Apache Kafka Input Data Source

Allows Panopticon Streams to subscribe to Kafka topics on an external cluster.


  1. In the New Data Source page, select Input > Kafka in the Connector drop-down list.
  2. Enter the connection details:
    • Bootstrap Server – A list of host/port pairs of Kafka servers used to bootstrap connections to the Kafka cluster. By default, the value is localhost:9092,broker:29092; this can be overridden by specifying another bootstrap server in the External Settings text box (see step 3).

    • Schema Registry Host – The host where the Schema Registry is located. This can be in a different location from the Kafka cluster.

    • Schema Registry Port – The port number of the Schema Registry, which provides the serving layer for the metadata. Default is 8081.
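The bootstrap server value is a comma-separated list of host:port pairs. As a sketch of how such a list breaks down (Python; the function name is illustrative, not part of the product):

```python
# Parse a Kafka-style bootstrap server list into (host, port) pairs.
# The default value documented above is used purely as sample input.
def parse_bootstrap_servers(servers: str) -> list[tuple[str, int]]:
    pairs = []
    for entry in servers.split(","):
        host, _, port = entry.strip().rpartition(":")
        pairs.append((host, int(port)))
    return pairs

print(parse_bootstrap_servers("localhost:9092,broker:29092"))
# → [('localhost', 9092), ('broker', 29092)]
```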

  3. Enter the External Settings to support authentication (i.e., username and password). Note that if the bootstrap server is not secure, then there is no need to authenticate and you may leave this text box blank.

    Below is an example of system settings for SASL authentication:
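    A representative set of Kafka client properties for SASL/PLAIN is shown below (illustrative values; the exact properties, mechanism, and credentials depend on your cluster's security configuration):

```
security.protocol=SASL_PLAINTEXT
sasl.mechanism=PLAIN
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="myuser" \
  password="mypassword";
```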

  4. Click Fetch Topics to populate the Topic drop-down list.

    By default, the Hide Internal Topics toggle button is enabled and the Avro message type is selected.

    Tap the slider to turn it off; the internal Kafka topics are then also displayed in the drop-down list.

    Click the drop-down list to search and select the desired topic.

    For non-Avro topics, select the Message Type: Fix, JSON, Text, XML, or Protobuf.

    •  If Text is selected, confirm the Text Qualifier and Column Delimiter, and whether the first row of the message includes column headings.

      • Text Qualifier – Specifies whether fields are enclosed by text qualifiers; any column delimiters within these qualifiers are ignored.

      • Column Delimiter – The column delimiter used when parsing the message text.

      • First Row Headings – Determines whether the first row supplies the retrieved column headings and is excluded from data discovery.
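As an illustration of how these three settings interact, a short Python sketch (not Panopticon's implementation) parsing a delimited message with a double-quote text qualifier and first-row headings:

```python
import csv
import io

# Sample message: comma column delimiter, double-quote text qualifier,
# first row contains column headings.
message = 'name,price\n"Widget, large",9.99\n"Gadget",19.95\n'

reader = csv.reader(io.StringIO(message), delimiter=",", quotechar='"')
rows = list(reader)
headings, records = rows[0], rows[1:]

print(headings)    # ['name', 'price']
print(records[0])  # ['Widget, large', '9.99'] – delimiter inside the qualifier is kept
```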

    • If JSON is selected, enter the Record Path, which identifies multiple records within the JSON document (e.g., myroot.items.item).

      • Record Path – The path to the records that the connector will query (e.g., myroot.items.item).
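A sketch of what a record path such as myroot.items.item resolves to (Python; the helper is illustrative, not the connector's actual code):

```python
import json

# Walk a dot-separated record path down to the list of records it identifies.
def records_at_path(document: str, path: str):
    node = json.loads(document)
    for key in path.split("."):
        node = node[key]
    return node

doc = '{"myroot": {"items": {"item": [{"id": 1}, {"id": 2}]}}}'
print(records_at_path(doc, "myroot.items.item"))
# → [{'id': 1}, {'id': 2}]
```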

    •  If Protobuf is selected, confirm the Decimal Separator, and enter the Schema Name and Type Name.

      Then click    to select the File Descriptor (.desc file) in the Open dialog.

      • Schema Name – The name of the Protobuf schema.

      • Type Name – The Protobuf message type that will be sent to Kafka.

      • File Descriptor – The FileDescriptorSet, an output of the protocol compiler that represents a set of .proto files compiled with the --descriptor_set_out option.
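The .desc file is produced by the protocol compiler; for example (assuming protoc is installed and myschema.proto is your schema file):

```
protoc --descriptor_set_out=myschema.desc --include_imports myschema.proto
```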

  5. Select the From Beginning checkbox to subscribe from the earliest available message onwards.

    If not selected, you are only subscribed to the latest messages.
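In Kafka consumer terms, this toggle is analogous to the auto.offset.reset setting (an analogy for orientation, not necessarily how Panopticon implements it internally):

```
# From Beginning checked   -> start from the earliest available offset
auto.offset.reset=earliest
# From Beginning unchecked -> start from the latest offset only
auto.offset.reset=latest
```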

  6. Select either the period (.) or comma (,) as the Decimal Separator.

    NOTE: Prepend 'default:' to elements that fall under the default namespace.

  7. Click  to fetch the schema based on the connection details. This populates the list
    of columns with the data types found by inspecting the first ‘n’ rows of the input data source.
  8. For non-Avro message types, except Protobuf, click to add columns to the Kafka connection that represent sections of the message. Then enter or select:

    • Name – The column name of the source schema.

    • Fix Tag/XPath/Json Path – The Fix tag, XPath, or JSON path of the column in the source schema.

    • Type – The data type of the column: Text, Numeric, or Time.

    • Date Format – The format used when the data type is Time.

    • Filter – Defined parameters that can be used as a filter. Only available for the Avro, JSON, Text, and XML message types.

    • Enabled – Determines whether the message field should be processed.

    NOTE: To parse and format times with higher than millisecond precision, the format string must end with a period followed by a sequence of uppercase S characters, with no additional characters after them.

    For example: yyyy-MM-dd HH:mm:ss.SSSSSS
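The pattern above is Java-style; as an illustration of sub-millisecond parsing, an equivalent in Python (where %f covers the .SSSSSS fraction at microsecond precision):

```python
from datetime import datetime

# Java pattern:   yyyy-MM-dd HH:mm:ss.SSSSSS
# Python pattern: %Y-%m-%d %H:%M:%S.%f  (microsecond precision)
ts = datetime.strptime("2024-03-01 12:30:45.123456", "%Y-%m-%d %H:%M:%S.%f")
print(ts.microsecond)  # → 123456
```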

  9. You can also opt to load or save a copy of the column definition.
  10. Define the Real-time Settings.
  11. Click  . The new data source is added to the Data Sources list.

 

 

(c) 2013-2024 Altair Engineering Inc. All Rights Reserved.
