Re: [DISCUSS] Flink backward compatibility
I think this is a very good discussion to have.
Flink is becoming part of more and more production deployments and more
tools are built around it.
The question is whether we want to (or can) make parts of the
control/maintenance/monitoring API stable, such that external
systems/frameworks can rely on them.
Which APIs are relevant?
Which APIs could be declared as stable?
Which parts are still evolving?
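On "which APIs could be declared as stable": Flink already uses stability annotations (org.apache.flink.annotation.Public, @PublicEvolving, @Internal) to mark this on the programming API. A minimal self-contained sketch of the idea, where the annotation definitions below are simplified stand-ins for the real flink-annotations classes, not the actual ones:

```java
import java.lang.annotation.Documented;
import java.lang.annotation.ElementType;
import java.lang.annotation.Target;

// Simplified stand-ins for Flink's stability annotations.
@Documented
@Target(ElementType.TYPE)
@interface Public {}

@Documented
@Target(ElementType.TYPE)
@interface PublicEvolving {}

// A type marked @Public promises compatibility within a major
// release line; @PublicEvolving may still change between minors.
@Public
class StableApi {
    String stability() { return "stable within the major release line"; }
}

@PublicEvolving
class EvolvingApi {
    String stability() { return "may change between minor releases"; }
}

public class StabilityMarkers {
    public static void main(String[] args) {
        System.out.println(new StableApi().stability());
        System.out.println(new EvolvingApi().stability());
    }
}
```

The open question in this thread is effectively whether the same kind of contract should extend beyond the programming API to the control/maintenance/monitoring surface.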
On Tue, Nov 27, 2018 at 15:10 Chesnay Schepler <
> I think this discussion needs specific examples as to what should be
> possible, as it is otherwise too vague / open to interpretation.
> For example, "job submission" may refer to CLI invocations continuing to
> work (i.e. CLI arguments), or being able to use a 1.6 client against a
> 1.7 cluster, which are entirely different things.
> What does "management" include? Dependencies? Set of jars that are
> released on maven? Set of jars bundled with flink-dist?
> On 26.11.2018 17:24, Thomas Weise wrote:
> > Hi,
> > I wanted to bring back the topic of backward compatibility with respect
> > to all/most of the user-facing aspects of Flink. Please note that this
> > isn't limited to the programming API, but also includes job submission
> > and management.
> > As can be seen in [1], changes in these areas cause difficulties
> > downstream. Projects have to choose between Flink versions, and users are
> > ultimately at a disadvantage, either by not being able to use the desired
> > dependency or by facing forced upgrades to their infrastructure.
> > IMO the preferred solution would be that downstream projects can build
> > against a minimum version of Flink and expect compatibility with future
> > releases of the major version stream. For example, my project depends on
> > 1.6.x and can expect to run without recompilation on 1.7.x and later.
> > How far away is Flink from stabilizing the surface that affects typical
> > users?
> > Thanks,
> > Thomas
> > [1] https://issues.apache.org/jira/browse/BEAM-5419
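As a concrete illustration of the "depend on 1.6.x, run on 1.7.x without recompilation" scenario Thomas describes: a downstream project would build against the minimum supported Flink version and let the cluster provide the actual runtime. A sketch in Maven terms (the artifact coordinates are the usual Flink ones for that era, but treat them as illustrative):

```xml
<!-- Build against the minimum supported Flink version (1.6.x). -->
<!-- Scope "provided": the cluster supplies its own, possibly newer,
     Flink runtime, so the job could run on a 1.7.x cluster without
     recompilation IF the APIs it touches remain compatible. -->
<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-streaming-java_2.11</artifactId>
  <version>1.6.0</version>
  <scope>provided</scope>
</dependency>
```

Whether this actually works across minor releases is exactly what the compatibility guarantee under discussion would have to pin down.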