What is the translation of "FLINK" in English?

Examples of using Flink in Chinese and their translations into English

十二头或更多的牛被称为Flink。
Twelve or more cows are called a flink.
Flink的核心是一个事件流数据流引擎。
The core of Flink is a streaming dataflow engine for event streams.
十二头或更多的牛被称为Flink。
Twelve or more cows are known as a flink.
Flink通过Task Slots来定义执行资源。
Execution resources in Flink are defined through Task Slots.
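In Flink, each TaskManager offers a fixed number of task slots, and a slot is the unit in which the runtime schedules parallel subtasks. A minimal Java sketch of setting the slot count, assuming a recent Flink release and its standard Configuration and DataStream APIs; the value of 4 slots is only illustrative:

    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.configuration.TaskManagerOptions;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class SlotConfigSketch {
        public static void main(String[] args) throws Exception {
            // Each TaskManager advertises this many task slots to the resource manager.
            Configuration conf = new Configuration();
            conf.set(TaskManagerOptions.NUM_TASK_SLOTS, 4); // illustrative value

            // Local environment that picks up the configuration above.
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.createLocalEnvironment(conf);
            env.fromElements(1, 2, 3).print();
            env.execute("slot-config-sketch");
        }
    }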
但是我想这些人都知道Flink是什么。
But I think these people all know what Flink is.
第一届Flink Forward于2015年在柏林举行。
The first edition of Flink Forward took place in 2015 in Berlin.
十二头或更多的牛被称为Flink。
A group of twelve or more cows is called a flink.
可以与Flink、Spark和其他云数据流系统集成。
Can be integrated with Flink, Spark and other cloud dataflow systems.
Flink 1.7.0是第一个完全支持Scala 2.12的版本。
Apache Flink 1.7.0 is the first version to fully support Scala 2.12.
Alink是在统一的分布式计算引擎Flink的基础上开发的。
Alink was developed based on Flink, a unified distributed computing engine.
重要改进包括Samsara数学环境以及支持使用Flink作为后端。
Key enhancements include the Samsara math environment and support for Flink as a back end.
Alink是在统一的分布式计算引擎Flink的基础上开发的。
Alink was developed on the basis of Flink, a unified distributed computing engine.
Flink还有一些额外的连接器通过Apache Bahir发布,包括:
Additional streaming connectors for Flink are being released through Apache Bahir, including:
如果启用了本地恢复,Flink将在运行任务的计算机上保留最新检查点的本地副本。
If local recovery is enabled, Flink will keep a local copy of the latest checkpoint on the computer running the task.
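A rough Java sketch of turning this behaviour on, assuming a recent Flink release where the option is exposed as CheckpointingOptions.LOCAL_RECOVERY; the 10-second checkpoint interval is an arbitrary example value:

    import org.apache.flink.configuration.CheckpointingOptions;
    import org.apache.flink.configuration.Configuration;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class LocalRecoverySketch {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Keep a secondary copy of each task's checkpointed state on local disk,
            // so a task restarted on the same machine can skip the remote download.
            conf.set(CheckpointingOptions.LOCAL_RECOVERY, true);

            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.createLocalEnvironment(conf);
            env.enableCheckpointing(10_000); // checkpoint every 10 s (illustrative)
            // Sources, transformations and sinks would be added here before execute().
        }
    }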
可用于流和批处理分析的混合引擎包括Apache Apex、Apache Spark和Apache Flink。
Hybrid engines that can be used for either stream or batch analytics include Apache Apex, Apache Spark, and Apache Flink.
Apache Flink 1.7.0继续添加更多的连接器,使其更容易与更多外部系统进行交互。
Apache Flink 1.7.0 continues to add more connectors, making it easier to interact with more external systems.
Facebook的许可证影响了许多重要的开源项目,包括Samza、Flink、Marmotta、Kafka和Bahir。
A number of important open source projects have been impacted by Facebook's license, including Samza, Flink, Marmotta, Kafka, and Bahir.
Flink和Spark都提供了Beam API之外的专有扩展,而Spark创建者对支持Beam并不是很感兴趣。
Both Flink and Spark have proprietary extensions beyond the Beam API, and Spark's creators are not very interested in supporting Beam.
除了在单一的机器上运行,它还支持分布式框架Apache Hadoop、Apache Spark和Apache Flink。
Other than running on a single machine, it also supports the distributed processing frameworks Apache Hadoop, Apache Spark, and Apache Flink.
Flink程序的基本构建块是streams和transformations(注意,DataSet在内部也是一个stream)。
The basic building blocks of Flink programs are streams and transformations (note that a DataSet is internally also a stream).
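To make the two terms concrete, here is a minimal Java program against the DataStream API in which a source stream is turned into further streams by two transformations (flatMap and filter) before reaching a printing sink; the class and job names are invented for the example:

    import org.apache.flink.api.common.functions.FlatMapFunction;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.util.Collector;

    public class StreamsAndTransformationsSketch {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // A stream produced by a source ...
            DataStream<String> lines = env.fromElements("flink streams", "flink transformations");

            // ... transformed into a new stream of words ...
            DataStream<String> words = lines.flatMap(new FlatMapFunction<String, String>() {
                @Override
                public void flatMap(String line, Collector<String> out) {
                    for (String word : line.split(" ")) {
                        out.collect(word);
                    }
                }
            });

            // ... and transformed again before being printed by a sink.
            words.filter(word -> word.startsWith("f")).print();
            env.execute("streams-and-transformations-sketch");
        }
    }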
除了在一台机器上运行,它还支持分布式处理框架Apache Hadoop、Apache Spark和Apache Flink。
Other than running on a single machine, it also supports the distributed processing frameworks Apache Hadoop, Apache Spark, and Apache Flink.
某些常见的工具和框架还包括内存关系数据库,如VoltDB、Spark、Storm、Flink、Kafka和某些NoSQL数据库。
Some of the common tools and frameworks include in-memory relational databases like VoltDB, Spark, Storm, Flink, Kafka, and some NoSQL databases.
MRQL是一个用于大规模分布式数据分析的查询处理和优化系统,构建于Apache Hadoop、Hama、Spark和Flink之上。
MRQL is a query processing and optimization system for large-scale, distributed data analysis, built on top of Apache Hadoop, Hama, Spark, and Flink.
如果发生机器或软件故障,重新启动后,Flink应用程序将从最新的checkpoint恢复处理。
In the event of a machine or software failure and upon restart, a Flink application resumes processing from the most recent successfully completed checkpoint.
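A short Java sketch of enabling that behaviour in the DataStream API; the 30-second interval and exactly-once mode are illustrative choices rather than values taken from the sentence above:

    import org.apache.flink.streaming.api.CheckpointingMode;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CheckpointRecoverySketch {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Snapshot all operator state every 30 seconds; after a failure and restart,
            // the job rolls back to the latest completed snapshot and replays from there.
            env.enableCheckpointing(30_000, CheckpointingMode.EXACTLY_ONCE);

            // Allow only one checkpoint in flight at a time (illustrative choice).
            env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);

            // Sources, transformations and sinks would be defined here before execute().
        }
    }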
同样,Flink的最终用户倾向于运行基于Flink的应用程序而不是Flink作业,这是Flink中可用的最高执行抽象。
Similarly, end users of Flink tend to run Flink-based applications rather than Flink jobs; this is the highest execution abstraction available in Flink.
Apache Beam是一个开源项目,提供了一组统一的API,用于跨执行引擎移植处理管道,包括Samza、Spark和Flink。
Apache Beam is an open-source project that provides a unified set of APIs for porting processing pipelines across execution engines, including Samza, Spark, and Flink.
将检查点数据写入持久存储是异步发生的,这意味着Flink应用程序在写检查点过程中可以继续处理数据。
Writing the checkpoint data to the persistent storage happens asynchronously, which means that a Flink application continues to process data during the checkpointing process.
Apache Flink(以前称为Stratosphere)在Java和Scala中具有强大的编程抽象,高性能运行时和自动程序优化。
Apache Flink (formerly called Stratosphere) features powerful programming abstractions in Java and Scala, a high-performance runtime, and automatic program optimization.
data Artisans应用程序工程总监Jamie Grier最近在OSCON 2016大会发言,谈到了使用Apache Flink构建的一种数据流体系结构。
Jamie Grier, Director of Applications Engineering at data Artisans, recently spoke at the OSCON 2016 conference about a data streaming architecture built using Apache Flink.
然后,pipeline由Beam支持的分布式处理后端之一执行,包括Apache Apex、Apache Flink、Apache Spark和Google Cloud Dataflow。
The pipeline is then executed by one of Beam's supported distributed processing back-ends, which include Apache Apex, Apache Flink, Apache Spark, and Google Cloud Dataflow.
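As an illustration of that portability, a minimal Beam pipeline in Java; it assumes the Beam Java SDK (and a runner such as the Flink or Direct runner) is on the classpath, and the runner is normally chosen with a --runner command-line flag rather than hard-coded:

    import org.apache.beam.sdk.Pipeline;
    import org.apache.beam.sdk.options.PipelineOptions;
    import org.apache.beam.sdk.options.PipelineOptionsFactory;
    import org.apache.beam.sdk.transforms.Create;
    import org.apache.beam.sdk.transforms.DoFn;
    import org.apache.beam.sdk.transforms.ParDo;

    public class BeamPortabilitySketch {
        public static void main(String[] args) {
            // Pass e.g. --runner=FlinkRunner to execute the same pipeline on Flink.
            PipelineOptions options = PipelineOptionsFactory.fromArgs(args).create();
            Pipeline pipeline = Pipeline.create(options);

            pipeline.apply(Create.of("apex", "flink", "spark", "dataflow"))
                    .apply(ParDo.of(new DoFn<String, String>() {
                        @ProcessElement
                        public void processElement(ProcessContext ctx) {
                            // The transform logic itself is runner-independent.
                            ctx.output("back-end: " + ctx.element());
                        }
                    }));

            pipeline.run().waitUntilFinish();
        }
    }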