We need two clarifications about Flink 1.6.0. We have Flink jobs handling hundreds of tenants with a 24-hour sliding window that slides every 5 minutes.
1) If checkpointing is enabled and the Flink job crashes in the middle of emitting results to the Kafka producer, and the job then resumes from the last known checkpoint, what happens to the partial results the Kafka producer has already sent out? Do we end up sending duplicates for the same window after recovering from the checkpoint, or does this window never get triggered again because we have moved forward in time?
2) If one input event in Flink produces 100 output messages to the Kafka producer after the window fires, how does setFlushOnCheckpoint work in the Kafka producer? I am confused about how it can ensure that all records before the checkpoint have been sent out, given that we create 100 output events from a single input event.
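And here is a toy sketch of the flush-on-checkpoint behaviour we are asking about in question 2 (again our own illustrative code, not the Flink producer; `snapshot_state` is our stand-in for the producer's checkpoint hook): the fan-out of one input into 100 outputs puts all 100 records into the producer's buffer, so our assumption is that the flush simply drains whatever is buffered when the checkpoint barrier arrives, regardless of how many outputs each input produced.

```python
# Toy model: a flatMap-style operator turns 1 input event into 100 outputs,
# all of which land in the producer's in-flight buffer. The checkpoint hook
# flushes the buffer and only lets the checkpoint complete once the buffer
# is empty, i.e. once all 100 records have been handed to Kafka.
class ToyProducer:
    def __init__(self):
        self.buffer = []   # records sent but not yet flushed to Kafka
        self.acked = []    # records Kafka has accepted

    def send(self, record):
        self.buffer.append(record)

    def flush(self):
        # Drain everything buffered so far, as flush-on-checkpoint would.
        self.acked.extend(self.buffer)
        self.buffer.clear()

    def snapshot_state(self):
        # Our stand-in for the producer's checkpoint hook.
        self.flush()
        assert not self.buffer, "checkpoint must not complete with pending records"

producer = ToyProducer()
# One input event fanned out into 100 output records before the barrier.
for out in (f"event-1/out-{i}" for i in range(100)):
    producer.send(out)

producer.snapshot_state()
print(len(producer.acked))  # all 100 records flushed before the checkpoint completes
```

Is this roughly the right picture, or does the 1-to-100 fan-out interact with the checkpoint barrier in some other way?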
Thanks for the help.