Re: [DISCUSS] Support Higher-order functions in Flink sql
This is a meaningful direction to improve the functionality of Flink SQL.
As Xuefu suggested, you can come up with a design doc covering the
functions you'd like to support and the improvements.
IMO, the main obstacle might be the syntax for lambda functions, which is
not currently supported in Calcite, e.g.: "TRANSFORM(arrays, element ->
element + 1)". In order to support this syntax,
we might need to discuss it with the Calcite community. This is unlike the
DDL parser, which is easy to extend in a plugin way that Calcite already
supports.
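For illustration, the intended semantics of such higher-order functions can be sketched in plain Python (the names follow Spark 2.4's built-ins; this is only a sketch of the behavior being discussed, not Flink or Calcite code):

```python
# Sketch of the semantics of SQL higher-order functions such as
# TRANSFORM, FILTER, and AGGREGATE. Names mirror Spark 2.4's built-ins;
# this is an illustration only, not actual Flink/Calcite code.

def transform(arr, fn):
    """TRANSFORM(array, x -> fn(x)): apply fn to each element."""
    return [fn(x) for x in arr]

def filter_(arr, pred):
    """FILTER(array, x -> pred(x)): keep elements satisfying pred."""
    return [x for x in arr if pred(x)]

def aggregate(arr, zero, merge):
    """AGGREGATE(array, zero, (acc, x) -> merge(acc, x)): left fold."""
    acc = zero
    for x in arr:
        acc = merge(acc, x)
    return acc

# The TRANSFORM(arrays, element -> element + 1) example from above:
print(transform([1, 2, 3], lambda e: e + 1))            # [2, 3, 4]
print(filter_([1, 2, 3, 4], lambda e: e % 2 == 0))      # [2, 4]
print(aggregate([1, 2, 3], 0, lambda acc, e: acc + e))  # 6
```

The hard part for Flink is not these semantics but parsing the `x -> expr` lambda syntax, which is why Calcite involvement is needed.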
It would be great if you can share more thoughts or work on this.
On Mon, 3 Dec 2018 at 17:20, Zhang, Xuefu <xuefu.z@xxxxxxxxxxxxxxx> wrote:
> Hi Wenhui,
> Thanks for bringing the topics up. Both make sense to me. For higher-order
> functions, I'd suggest you come up with a list of things you'd like to add.
> Overall, Flink SQL is weak in handling complex types. Ideally we should
> have a doc covering the gaps and provide a roadmap for enhancement. It
> would be great if you can broaden the topic a bit.
> Sender:winifred.wenhui.tang@xxxxxxxxx <winifred.wenhui.tang@xxxxxxxxx>
> Sent at:2018 Dec 3 (Mon) 16:13
> Recipient:dev <dev@xxxxxxxxxxxxxxxx>
> Subject:[DISCUSS] Support Higher-order functions in Flink sql
> Hello all,
> Spark 2.4.0 was released last month. I noticed that Spark 2.4
> “Add a lot of new built-in functions, including higher-order functions, to
> deal with complex data types easier.”
> I wonder if it's necessary for Flink to add higher-order functions to
> enhance its ability.
> By the way, I found that if we want to enhance the functionality of Flink
> SQL, we often need to modify Calcite. That may be a little inconvenient, so
> maybe we can extend the Calcite core parser in Flink to deal with some
> non-standard SQL syntax, as mentioned in the Flink SQL DDL Design.
> Look forward to your feedback.
> Wen-hui Tang
>  https://issues.apache.org/jira/browse/SPARK-23899
> Winifred-wenhui Tang