# Repository Metadata Analyzer
## Cache
### `quarkus.cache.caffeine."metaAnalyzer".expire-after-write`
Defines the time-to-live of cache entries.
| Required | true |
|---|---|
| Type | duration |
| Default | PT2H |
| ENV | QUARKUS_CACHE_CAFFEINE__METAANALYZER__EXPIRE_AFTER_WRITE |
### `quarkus.cache.caffeine."metaAnalyzer".initial-capacity`
Defines the initial capacity of the cache.
| Required | true |
|---|---|
| Type | integer |
| Default | 5 |
| ENV | QUARKUS_CACHE_CAFFEINE__METAANALYZER__INITIAL_CAPACITY |
### `quarkus.cache.enabled`
Defines whether caching of analysis results is enabled.
| Required | true |
|---|---|
| Type | boolean |
| Default | true |
| ENV | QUARKUS_CACHE_ENABLED |
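Taken together, the cache settings above can be tuned in `application.properties` (or via the listed environment variables); a minimal sketch, where the capacity and TTL values are illustrative rather than recommendations:

```properties
# Enable caching of analysis results
quarkus.cache.enabled=true
# Size the meta-analyzer cache for roughly 64 entries up front
quarkus.cache.caffeine."metaAnalyzer".initial-capacity=64
# Evict entries one hour after they were written
quarkus.cache.caffeine."metaAnalyzer".expire-after-write=PT1H
```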
## Database
### `quarkus.datasource.jdbc.url`
Specifies the JDBC URL to use when connecting to the database.
| Required | true |
|---|---|
| Type | string |
| Default | null |
| ENV | QUARKUS_DATASOURCE_JDBC_URL |
### `quarkus.datasource.password`
Specifies the password to use when authenticating to the database.
| Required | true |
|---|---|
| Type | string |
| Default | null |
| ENV | QUARKUS_DATASOURCE_PASSWORD |
### `quarkus.datasource.username`
Specifies the username to use when authenticating to the database.
| Required | true |
|---|---|
| Type | string |
| Default | null |
| ENV | QUARKUS_DATASOURCE_USERNAME |
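The three datasource settings are typically provided together; a sketch in `application.properties` form (the host, database name, and credentials are placeholders — the same values can equally be supplied via the listed environment variables):

```properties
quarkus.datasource.jdbc.url=jdbc:postgresql://db.acme.com:5432/dtrack
quarkus.datasource.username=dtrack
quarkus.datasource.password=changeme
```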
## General
### `secret.key.path`
Defines the path to the secret key to be used for data encryption and decryption.
| Required | false |
|---|---|
| Type | string |
| Default | ~/.dependency-track/keys/secret.key |
| ENV | SECRET_KEY_PATH |
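In containerized deployments the key file is commonly mounted into the container and the default path overridden; a sketch (the mount path is a placeholder):

```properties
secret.key.path=/var/run/secrets/dtrack/secret.key
```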
## HTTP
### `quarkus.http.port`
The HTTP port to listen on. Application metrics are exposed on this port as well.
| Required | false |
|---|---|
| Type | integer |
| Default | 8091 |
| ENV | QUARKUS_HTTP_PORT |
## Kafka
### `dt.kafka.topic.prefix`
Defines an optional prefix to assume for all Kafka topics the application consumes from or produces to. The prefix is also prepended to the application's consumer group ID.
| Required | false |
|---|---|
| Type | string |
| Default | null |
| Example | acme- |
| ENV | DT_KAFKA_TOPIC_PREFIX |
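As a sketch, with the prefix below configured, a topic named (hypothetically) `repo-meta-analysis` would be consumed as `acme-repo-meta-analysis`, and the consumer group ID gains the same `acme-` prefix:

```properties
dt.kafka.topic.prefix=acme-
```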
### `kafka-streams.auto.offset.reset`
Refer to https://kafka.apache.org/documentation/#consumerconfigs_auto.offset.reset for details.
| Required | false |
|---|---|
| Type | enum |
| Valid Values | [earliest, latest, none] |
| Default | earliest |
| ENV | KAFKA_STREAMS_AUTO_OFFSET_RESET |
### `kafka-streams.commit.interval.ms`
Defines the interval in milliseconds at which consumer offsets are committed to the Kafka brokers. The Kafka Streams default of 30s is overridden here to 5s.
Refer to https://kafka.apache.org/documentation/#streamsconfigs_commit.interval.ms for details.
| Required | false |
|---|---|
| Type | integer |
| Default | 5000 |
| ENV | KAFKA_STREAMS_COMMIT_INTERVAL_MS |
### `kafka-streams.exception.thresholds.deserialization.count`
Defines the threshold for records failing to be deserialized within `kafka-streams.exception.thresholds.deserialization.interval`. Deserialization failures within the threshold are logged; once the threshold is exceeded, the application stops processing further records and shuts down.
| Required | true |
|---|---|
| Type | integer |
| Default | 5 |
| ENV | KAFKA_STREAMS_EXCEPTION_THRESHOLDS_DESERIALIZATION_COUNT |
### `kafka-streams.exception.thresholds.deserialization.interval`
Defines the interval within which up to `kafka-streams.exception.thresholds.deserialization.count` records are allowed to fail deserialization. Deserialization failures within the threshold are logged; once the threshold is exceeded, the application stops processing further records and shuts down.
| Required | true |
|---|---|
| Type | duration |
| Default | PT30M |
| ENV | KAFKA_STREAMS_EXCEPTION_THRESHOLDS_DESERIALIZATION_INTERVAL |
### `kafka-streams.exception.thresholds.processing.count`
Defines the threshold for records failing to be processed within `kafka-streams.exception.thresholds.processing.interval`. Processing failures within the threshold are logged; once the threshold is exceeded, the application stops processing further records and shuts down.
| Required | true |
|---|---|
| Type | integer |
| Default | 50 |
| ENV | KAFKA_STREAMS_EXCEPTION_THRESHOLDS_PROCESSING_COUNT |
### `kafka-streams.exception.thresholds.processing.interval`
Defines the interval within which up to `kafka-streams.exception.thresholds.processing.count` records are allowed to fail processing. Processing failures within the threshold are logged; once the threshold is exceeded, the application stops processing further records and shuts down.
| Required | true |
|---|---|
| Type | duration |
| Default | PT30M |
| ENV | KAFKA_STREAMS_EXCEPTION_THRESHOLDS_PROCESSING_INTERVAL |
### `kafka-streams.exception.thresholds.production.count`
Defines the threshold for records failing to be produced within `kafka-streams.exception.thresholds.production.interval`. Production failures within the threshold are logged; once the threshold is exceeded, the application stops processing further records and shuts down.
| Required | true |
|---|---|
| Type | integer |
| Default | 5 |
| ENV | KAFKA_STREAMS_EXCEPTION_THRESHOLDS_PRODUCTION_COUNT |
### `kafka-streams.exception.thresholds.production.interval`
Defines the interval within which up to `kafka-streams.exception.thresholds.production.count` records are allowed to fail producing. Production failures within the threshold are logged; once the threshold is exceeded, the application stops processing further records and shuts down.
| Required | true |
|---|---|
| Type | duration |
| Default | PT30M |
| ENV | KAFKA_STREAMS_EXCEPTION_THRESHOLDS_PRODUCTION_INTERVAL |
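The three threshold pairs above share one mechanism: failures are counted within an interval, and only when the count inside a single interval exceeds the configured limit does the application shut down. A minimal sketch of that logic, purely illustrative and not the actual implementation:

```python
from dataclasses import dataclass, field


@dataclass
class FailureThreshold:
    """Tolerate up to `count` failures per `interval_s` seconds; beyond that, signal shutdown."""
    count: int
    interval_s: float
    _window_start: float = field(default=0.0)
    _failures: int = field(default=0)

    def record_failure(self, now: float) -> bool:
        """Record one failure at time `now`; return True once the threshold is exceeded."""
        if now - self._window_start >= self.interval_s:
            # A new interval begins; earlier failures no longer count.
            self._window_start = now
            self._failures = 0
        self._failures += 1
        return self._failures > self.count


# Mirrors kafka-streams.exception.thresholds.deserialization.*: 5 failures per 30 minutes.
threshold = FailureThreshold(count=5, interval_s=30 * 60)
# Six failures in quick succession: the first five are tolerated, the sixth trips the threshold.
results = [threshold.record_failure(now=t) for t in range(6)]
```

Failures spaced farther apart than the interval never accumulate, which is why a short interval with a small count is stricter than a long interval with a proportionally larger count.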
### `kafka-streams.metrics.recording.level`
Refer to https://kafka.apache.org/documentation/#adminclientconfigs_metrics.recording.level for details.
| Required | false |
|---|---|
| Type | enum |
| Valid Values | [INFO, DEBUG, TRACE] |
| Default | DEBUG |
| ENV | KAFKA_STREAMS_METRICS_RECORDING_LEVEL |
### `kafka-streams.num.stream.threads`
Refer to https://kafka.apache.org/documentation/#streamsconfigs_num.stream.threads for details.
| Required | true |
|---|---|
| Type | integer |
| Default | 3 |
| ENV | KAFKA_STREAMS_NUM_STREAM_THREADS |
### `kafka.bootstrap.servers`
Comma-separated list of brokers to use for establishing the initial connection to the Kafka cluster.
Refer to https://kafka.apache.org/documentation/#consumerconfigs_bootstrap.servers for details.
| Required | true |
|---|---|
| Type | string |
| Default | null |
| Example | broker-01.acme.com:9092,broker-02.acme.com:9092 |
| ENV | KAFKA_BOOTSTRAP_SERVERS |
### `quarkus.kafka-streams.application-id`
Defines the ID to uniquely identify this application in the Kafka cluster.
Refer to https://kafka.apache.org/documentation/#streamsconfigs_application.id for details.
| Required | false |
|---|---|
| Type | string |
| Default | ${dt.kafka.topic.prefix}hyades-repository-meta-analyzer |
| ENV | QUARKUS_KAFKA_STREAMS_APPLICATION_ID |
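Because the default value interpolates `${dt.kafka.topic.prefix}`, configuring a topic prefix implicitly changes the application ID, and with it the consumer group; a sketch:

```properties
dt.kafka.topic.prefix=acme-
# With the default value left in place, the effective application ID becomes:
#   acme-hyades-repository-meta-analyzer
```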
## Observability
### `quarkus.log.console.json`
Defines whether logs should be written in JSON format.
| Required | false |
|---|---|
| Type | boolean |
| Default | false |
| ENV | QUARKUS_LOG_CONSOLE_JSON |
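Structured JSON logs are typically enabled when output is shipped to a log aggregator rather than read by humans; a sketch:

```properties
quarkus.log.console.json=true
```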