# Mirror Service

## Datasource

### mirror.datasource.osv.alias-sync-enabled

Defines whether vulnerability aliases should be parsed from OSV.

| Required | false |
|---|---|
| Type | boolean |
| Default | false |
| ENV | MIRROR_DATASOURCE_OSV_ALIAS_SYNC_ENABLED |
### mirror.datasource.osv.base-url

Defines the URL of the OSV storage bucket.

| Required | false |
|---|---|
| Type | string |
| Default | https://osv-vulnerabilities.storage.googleapis.com |
| ENV | MIRROR_DATASOURCE_OSV_BASE_URL |
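As a sketch, the two OSV datasource settings above can be supplied through their environment variables, for example when pointing the service at an internal mirror of the OSV bucket (the mirror URL below is a made-up example):

```shell
# Enable parsing of vulnerability aliases from OSV.
export MIRROR_DATASOURCE_OSV_ALIAS_SYNC_ENABLED="true"

# Point the service at an alternative OSV bucket
# (hypothetical URL; assumes an internal mirror exists).
export MIRROR_DATASOURCE_OSV_BASE_URL="https://osv-mirror.example.com"
```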
## HTTP

### quarkus.http.port

HTTP port to listen on. Application metrics will be available via this port.

| Required | false |
|---|---|
| Type | integer |
| Default | 8093 |
| ENV | QUARKUS_HTTP_PORT |
## Kafka

### kafka-streams.commit.interval.ms

Defines the interval in milliseconds at which consumer offsets are committed to the Kafka brokers. The Kafka default of 30s has been lowered to 5s.

Refer to https://kafka.apache.org/documentation/#streamsconfigs_commit.interval.ms for details.

| Required | false |
|---|---|
| Type | integer |
| Default | 5000 |
| ENV | KAFKA_STREAMS_COMMIT_INTERVAL_MS |
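To illustrate, the commit interval can be tightened further via its environment variable; smaller values reduce how many records are reprocessed after a restart, at the cost of more frequent commit requests to the brokers (the value below is illustrative):

```shell
# Commit consumer offsets every second instead of the
# service default of 5000 ms (5s).
export KAFKA_STREAMS_COMMIT_INTERVAL_MS="1000"
```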
### kafka-streams.exception.thresholds.deserialization.count

Defines the threshold for records failing to be deserialized within `kafka-streams.exception.thresholds.deserialization.interval`. Deserialization failures within the threshold are logged; failures exceeding the threshold cause the application to stop processing further records and shut down.

| Required | true |
|---|---|
| Type | integer |
| Default | 5 |
| ENV | KAFKA_STREAMS_EXCEPTION_THRESHOLDS_DESERIALIZATION_COUNT |
### kafka-streams.exception.thresholds.deserialization.interval

Defines the interval within which up to `kafka-streams.exception.thresholds.deserialization.count` records are allowed to fail deserialization. Deserialization failures within the threshold are logged; failures exceeding the threshold cause the application to stop processing further records and shut down.

| Required | true |
|---|---|
| Type | duration |
| Default | PT30M |
| ENV | KAFKA_STREAMS_EXCEPTION_THRESHOLDS_DESERIALIZATION_INTERVAL |
### kafka-streams.exception.thresholds.processing.count

Defines the threshold for records failing to be processed within `kafka-streams.exception.thresholds.processing.interval`. Processing failures within the threshold are logged; failures exceeding the threshold cause the application to stop processing further records and shut down.

| Required | true |
|---|---|
| Type | integer |
| Default | 50 |
| ENV | KAFKA_STREAMS_EXCEPTION_THRESHOLDS_PROCESSING_COUNT |
### kafka-streams.exception.thresholds.processing.interval

Defines the interval within which up to `kafka-streams.exception.thresholds.processing.count` records are allowed to fail processing. Processing failures within the threshold are logged; failures exceeding the threshold cause the application to stop processing further records and shut down.

| Required | true |
|---|---|
| Type | duration |
| Default | PT30M |
| ENV | KAFKA_STREAMS_EXCEPTION_THRESHOLDS_PROCESSING_INTERVAL |
### kafka-streams.exception.thresholds.production.count

Defines the threshold for records failing to be produced within `kafka-streams.exception.thresholds.production.interval`. Production failures within the threshold are logged; failures exceeding the threshold cause the application to stop processing further records and shut down.

| Required | true |
|---|---|
| Type | integer |
| Default | 5 |
| ENV | KAFKA_STREAMS_EXCEPTION_THRESHOLDS_PRODUCTION_COUNT |
### kafka-streams.exception.thresholds.production.interval

Defines the interval within which up to `kafka-streams.exception.thresholds.production.count` records are allowed to fail production. Production failures within the threshold are logged; failures exceeding the threshold cause the application to stop processing further records and shut down.

| Required | true |
|---|---|
| Type | duration |
| Default | PT30M |
| ENV | KAFKA_STREAMS_EXCEPTION_THRESHOLDS_PRODUCTION_INTERVAL |
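Each of the three thresholds above is a count/interval pair: up to *count* failures per *interval* window are tolerated and logged, and the next failure shuts the application down. As a sketch, loosening the deserialization pair to 20 failures per 15 minutes would look like this (the values are illustrative, not recommendations):

```shell
# Tolerate up to 20 deserialization failures...
export KAFKA_STREAMS_EXCEPTION_THRESHOLDS_DESERIALIZATION_COUNT="20"

# ...within any 15-minute window (ISO-8601 duration format).
export KAFKA_STREAMS_EXCEPTION_THRESHOLDS_DESERIALIZATION_INTERVAL="PT15M"
```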
### kafka-streams.num.stream.threads

The number of threads to allocate for stream processing tasks. Note that specifying a number higher than the number of input partitions provides no additional benefit, as excess threads will simply sit idle.

Refer to https://kafka.apache.org/documentation/#streamsconfigs_num.stream.threads for details.

| Required | true |
|---|---|
| Type | integer |
| Default | 3 |
| ENV | KAFKA_STREAMS_NUM_STREAM_THREADS |
### kafka.bootstrap.servers

Comma-separated list of brokers to use for establishing the initial connection to the Kafka cluster.

Refer to https://kafka.apache.org/documentation/#consumerconfigs_bootstrap.servers for details.

| Required | true |
|---|---|
| Type | string |
| Default | null |
| Example | broker-01.acme.com:9092,broker-02.acme.com:9092 |
| ENV | KAFKA_BOOTSTRAP_SERVERS |
### kafka.max.request.size

Defines the maximum size of a Kafka producer request in bytes.

Some messages, such as Bills of Vulnerabilities, can be bigger than the default of 1MiB. Because the size check is performed before records are compressed, this value may need to be increased even though the compressed record is much smaller. The Kafka default of 1MiB has been raised to 2MiB.

Refer to https://kafka.apache.org/documentation/#producerconfigs_max.request.size for details.

| Required | true |
|---|---|
| Type | integer |
| Default | 2097152 |
| ENV | KAFKA_MAX_REQUEST_SIZE |
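For example, if Bills of Vulnerabilities in a given deployment regularly exceed 2MiB before compression, the limit could be raised to 4MiB (4 × 1024 × 1024 = 4194304 bytes; the value is illustrative):

```shell
# Allow producer requests of up to 4 MiB.
# Remember: the limit applies to the uncompressed record size.
export KAFKA_MAX_REQUEST_SIZE="4194304"
```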
### kafka.topic.prefix

Defines an optional prefix to assume for all Kafka topics the application consumes from or produces to. The prefix will also be prepended to the application's consumer group ID.

| Required | false |
|---|---|
| Type | string |
| Default | null |
| Example | acme- |
| ENV | KAFKA_TOPIC_PREFIX |
### quarkus.kafka-streams.application-id

Defines the ID to uniquely identify this application in the Kafka cluster.

Refer to https://kafka.apache.org/documentation/#streamsconfigs_application.id for details.

| Required | false |
|---|---|
| Type | string |
| Default | ${kafka.topic.prefix}hyades-mirror-service |
| ENV | QUARKUS_KAFKA_STREAMS_APPLICATION_ID |
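Because the application ID defaults to ${kafka.topic.prefix}hyades-mirror-service, setting a topic prefix also changes the consumer group ID unless the ID is pinned explicitly. A sketch using the acme- prefix from the example above:

```shell
# Prefix all topics and the consumer group ID with "acme-".
export KAFKA_TOPIC_PREFIX="acme-"

# With the prefix set, the default application ID resolves to
# "acme-hyades-mirror-service". Pin it explicitly if it must
# stay stable across prefix changes.
export QUARKUS_KAFKA_STREAMS_APPLICATION_ID="${KAFKA_TOPIC_PREFIX}hyades-mirror-service"
```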
## Observability

### quarkus.log.console.json

Defines whether logs should be written in JSON format.

| Required | false |
|---|---|
| Type | boolean |
| Default | false |
| ENV | QUARKUS_LOG_CONSOLE_JSON |