Memory - Kafka stream queue - configuring the consumer property queuedchunks.max
The meaning of the consumer property queuedchunks.max is a bit shadowy to me.
As I understand it, each stream in the consumer has a queue whose capacity is defined by queuedchunks.max.
1) But if the consumer starts consuming more topics (within a single stream), does that affect the maximum amount of data the queue is able to hold?
For example: if I set fetch.size = 1000 and queuedchunks.max = 10, does that mean that no matter how many topics I consume, the queue in memory will never grow beyond 1000 * 10 bytes?
2) Is the queue an effective way to collect messages asynchronously on the consumer before flushing them to disk? Since disk I/O is slow, is it better to collect messages in the queue rather than trying to write them to disk one at a time?
3) How are messages ordered in the queue if I consume n topics?
Does each queue node (entry) keep messages of one (single) topic:
[t1], [t2], [t3], [t1], [t2] ...?
Or is it possible that each node keeps messages of different topics:
[t1, t2], [t3, t1, t2] ...?
4) Is fetch.size the maximum size of a node (entry)?
5) Is it possible to set fetch.size per topic, or is it a consumer-wide property only?
Thank you.
1 - No, the size of a consumer stream's queue does not change with the number of topics, and the number of streams remains equal to what was defined when the consumer was started.
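To make this bound concrete, here is a back-of-the-envelope sketch using the numbers from the question; the stream count of 4 is a made-up example value, since the real count is whatever was requested at consumer start.

```java
// Hedged sketch: rough upper bound on memory held in consumer stream queues,
// assuming each queue entry is one fetched chunk of at most fetch.size bytes.
public class QueueMemoryBound {
    public static void main(String[] args) {
        long fetchSize = 1000;      // fetch.size: max bytes per fetched chunk
        long queuedChunksMax = 10;  // queuedchunks.max: chunks buffered per stream queue
        int numStreams = 4;         // example value; fixed when the consumer starts

        long perStreamBound = fetchSize * queuedChunksMax;  // bytes per stream queue
        long consumerBound = perStreamBound * numStreams;   // bytes across all streams
        System.out.println(perStreamBound + " " + consumerBound); // 10000 40000
    }
}
```

The bound is per stream, so adding topics to an existing stream does not raise it; only raising fetch.size, queuedchunks.max, or the stream count would.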
2 - Yes.
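The pattern behind this answer can be sketched with a standard bounded queue (this is an illustration of the buffering idea, not Kafka's internal implementation): a fetcher thread fills the queue, and a writer drains it in batches so the slow disk write happens once per batch instead of once per message.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hedged sketch: a bounded in-memory queue decouples message arrival from
// slow disk I/O; the writer side drains many messages per disk write.
public class BatchWriter {
    public static void main(String[] args) throws InterruptedException {
        // Capacity plays the role of queuedchunks.max: it bounds memory use
        // and applies back-pressure to the fetcher when the writer lags.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(10);

        for (int i = 0; i < 10; i++) {
            queue.put("msg-" + i);      // fetcher side: enqueue as messages arrive
        }

        List<String> batch = new ArrayList<>();
        queue.drainTo(batch);           // writer side: take everything in one go
        // ... one disk write for the whole batch would go here ...
        System.out.println(batch.size());
    }
}
```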
3 - Each "message" in the queue is a fetched chunk of a defined maximum size (fetch.size), and a chunk may only contain messages of a single partition of a single topic.
So it looks like: [t1-p1][t1-p2][t2-p1]...[tn-pk]
There is no ordering between partitions, but messages of one partition arrive in order.
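That ordering guarantee can be illustrated with a small self-check (the chunk data here is invented for the example): chunks of different partitions interleave arbitrarily, but the offsets of any single partition only ever increase across its chunks.

```java
import java.util.HashMap;
import java.util.Map;

// Hedged illustration: chunks are tagged with one topic-partition each;
// we verify that per-partition offsets stay increasing even though the
// chunks of different partitions arrive interleaved.
public class ChunkOrder {
    public static void main(String[] args) {
        // Each row: topic-partition tag followed by message offsets in that chunk.
        String[][] chunks = {
            {"t1-p1", "0", "1"},
            {"t1-p2", "0", "1"},
            {"t2-p1", "0"},
            {"t1-p1", "2", "3"},   // continues t1-p1, still in offset order
        };

        Map<String, Integer> lastOffset = new HashMap<>();
        boolean ordered = true;
        for (String[] chunk : chunks) {
            for (int i = 1; i < chunk.length; i++) {
                int off = Integer.parseInt(chunk[i]);
                Integer prev = lastOffset.put(chunk[0], off);
                if (prev != null && off <= prev) {
                    ordered = false;   // would indicate reordering within a partition
                }
            }
        }
        System.out.println(ordered ? "per-partition order holds" : "out of order");
    }
}
```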
4 - A chunk may reach fetch.size in size, or it may not.
5 - No.
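For reference, a configuration sketch using the property names the question uses (these match the 0.7-era high-level consumer; later client versions renamed them, so check the documentation for your Kafka version; the ZooKeeper address and group id are example values):

```java
import java.util.Properties;

// Hedged sketch of a high-level consumer configuration; fetch.size is set
// once for the whole consumer and applies to every topic it reads.
public class ConsumerConfigSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("zookeeper.connect", "localhost:2181"); // example address
        props.put("group.id", "example-group");           // example group id
        props.put("fetch.size", "1000");       // max bytes per fetched chunk, consumer-wide
        props.put("queuedchunks.max", "10");   // chunks buffered per stream queue
        // There is no per-topic fetch.size key in this configuration model.
        System.out.println(props.getProperty("fetch.size"));
    }
}
```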