Hello, everyone. To help you find suitable jobs and land offers from the companies you like, the community has collected recruitment needs from many enterprise users of real-time computing and Flink. The postings from Alibaba, ByteDance, Kuaishou, NetEase Cloud Music, Good Future, Qihoo 360, Meituan, Xiaohongshu, iQiyi, JD.com (Jingdong), and other companies are organized below for your reference.
More companies are welcome to post real-time computing and Flink related recruitment information. If you have such needs, please contact Xiaosquirrel (WeChat ID: Ververica2019).
The hiring companies and positions are listed below. If you are looking for a job or thinking about changing jobs, come and take a look!
Note: Companies are listed in the order in which their recruitment information was updated. Use the list below to locate the company you are interested in.
List of companies and positions
- Alibaba Cloud Intelligence Business Group | Big Data Computing Platform R&D Expert (Real-time Computing)
- ByteDance Infrastructure Team | Stream Computing R&D Engineer
- Kuaishou | Real-time Computing Engine R&D Engineer/Expert & Distributed Computing Engine R&D Engineer/Expert
- NetEase Cloud Music | Data Platform Development Engineer
- Good Future | Senior Development Engineer of Data Platform
- Qihoo 360 | Big Data Basic Components R&D Engineer
- Meituan | Big Data Computing Engine R&D Expert
- Xiaohongshu | Flink Data Development
- iQiyi | Senior R&D Engineer of Big Data Storage Service & Senior R&D Engineer of Big Data OLAP Service & Senior R&D Engineer of Big Data Real-time Computing Service
- JD.com (Jingdong) | Algorithm Architecture Engineer (Real-time and Offline)
Alibaba Cloud Intelligence Business Group: Big Data Computing Platform R&D Expert (Real-time Computing)
Basic Information:
Experience: more than 3 years
Department: Alibaba Group
Education: Master's degree
Responsibilities
1. Build Alibaba's real-time computing platform based on Apache Flink on the Hadoop/Kubernetes ecosystem, serving all real-time data analysis businesses of Alibaba Group;
2. Build the world-leading enterprise Apache Flink offering, Ververica Platform, providing the most convenient real-time computing cloud products and services worldwide;
3. Participate in the construction of national strategic projects such as City Brain and intelligent transportation, using real-time computing technology to process massive, time-sensitive data from the real world.
Job requirements
1. Solid theoretical foundation in computer science, with a strong grounding in data structures and algorithms;
2. Proficient in Java programming, with excellent system debugging/profiling skills and experience;
3. Familiar with common object-oriented design patterns, with excellent system architecture design ability;
4. Familiar with open source big data and container technologies such as Flink/Spark/Hadoop/K8s; being active in open source communities is preferred;
5. Familiarity with server-side and web technologies such as Spring/MyBatis/presents/React is preferred;
6. Hands-on experience managing and optimizing large-scale Hadoop/K8s production clusters is preferred.
We encourage everyone to take part in public service activities. If you have participated in such activities, you are welcome to attach relevant proof to your resume. Acceptable proof includes, but is not limited to: a volunteer service certificate issued by the National Volunteer Service Information System, a public service certificate issued by the “3 Hours for Everyone” public welfare platform, or a volunteer service certificate granted by a volunteer service organization (including social groups, social service organizations, and foundations).
How to apply
- Working city: Hangzhou
- Resume: talent.alibaba.com/off-campus-…
- Working cities: Beijing, Hangzhou, Shanghai
- Resume: talent.alibaba.com/off-campus-…
ByteDance Infrastructure Team: Stream Computing R&D Engineer
Team Introduction:
The stream computing team is responsible for the company's internal streaming computing application scenarios, supporting many core businesses such as AML, recommendation, data warehousing, search, advertising, streaming media, and security & risk control. Streaming computing is currently based mainly on the Flink engine, and the team faces the challenges of extremely large single jobs (tens of millions of QPS) and very large clusters (tens of thousands of machines), with deep optimization work in SQL, State & Checkpoint, and Runtime.
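For readers less familiar with this stack, the snippet below is a minimal, purely illustrative Flink Streaming SQL job of the kind such a team runs and optimizes. It is not ByteDance code; the table, columns, and connector options are made up, and Flink's built-in datagen connector is used so the example has no external dependencies.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.TableEnvironment;

public class StreamingSqlSketch {
    public static void main(String[] args) {
        // Streaming-mode Table/SQL environment
        TableEnvironment tEnv = TableEnvironment.create(EnvironmentSettings.inStreamingMode());

        // Hypothetical source table backed by the built-in datagen connector
        tEnv.executeSql(
            "CREATE TABLE clicks (" +
            "  user_id BIGINT," +
            "  url     STRING" +
            ") WITH (" +
            "  'connector' = 'datagen'," +
            "  'rows-per-second' = '10'," +
            "  'fields.user_id.min' = '1'," +
            "  'fields.user_id.max' = '5'" +
            ")");

        // A continuously updating aggregation: each incoming row updates the per-user count
        tEnv.executeSql(
                "SELECT user_id, COUNT(*) AS clicks_per_user FROM clicks GROUP BY user_id")
            .print();
    }
}
```

In production the datagen source would be replaced by a real connector such as Kafka, and the optimization work mentioned above (SQL, State & Checkpoint, Runtime) targets exactly the state and runtime behavior behind such continuously updating aggregations.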
Responsibilities
1. Build an efficient, real-time, and stable stream computing engine to support the recommendation and advertising businesses of multiple product lines within ByteDance;
2. Build a high-performance, easy-to-use SQL engine, support SQL syntax for special scenarios, and optimize the performance of Streaming SQL jobs;
3. Build a unified stream-batch computing engine based on SQL to support stream-batch unified application scenarios for some core businesses;
4. Explore technical solutions for stream computing on emerging hardware, real-time data warehouses, machine learning, quasi-real-time interactive queries, and more.
Job requirements
1. Consider yourself a technology geek, with strong problem-solving skills;
2. Proficient in one or more programming languages such as Java/C++/Go;
3. Solid theoretical foundation in computer science, with a strong grounding in data structures and algorithms;
4. Understand the principles of parallel or distributed computing; familiar with the characteristics and technical solutions of high-concurrency, high-stability, linearly scalable systems for massive data;
5. In-depth research on and hands-on experience with one or more open source computing frameworks such as Flink/Calcite/Storm/Kafka/Yarn/Hive/Spark/Kubernetes is preferred;
6. In-depth research and experience in real-time computing, offline computing, OLAP, or machine learning is preferred.
How to apply
- Working city: Beijing/Hangzhou
- Resume: job.toutiao.com/referral/mo…
Kuaishou: Real-time Computing Engine R&D Engineer/Expert [Data Architecture] & Distributed Computing Engine R&D Engineer/Expert [Data Architecture]
Real-time Computing Engine R&D Engineer/Expert [Data Architecture]
Responsibilities
1. Participate in the development and optimization of the Kuaishou real-time computing engine (Flink), build a stream-batch unified data engine, support the company's rich stream and batch computing needs and major events, and solve real business requirements and performance problems;
2. Take on the design and implementation complexity of a big data platform, analyze and identify optimization points in the system, and be responsible for improving the system's soundness, reliability, and availability;
3. Keep communicating with the open source community, introduce community features and systems that help the company's business scenarios, or contribute internally developed functionality back to the community.
Job requirements
1. Bachelor's degree or above in computer science or a related field;
2. Familiar with at least one mainstream distributed computing engine; close reading of the Flink source code is a plus; experience with secondary development of open source systems or code contributed to an open source community is a plus;
3. Excellent design and coding skills with high engineering-quality standards; able to quickly design and implement solutions for business needs and problems;
4. Active thinker with strong analytical and problem-solving ability, a strong sense of responsibility, passion for the work, and good communication skills.
How to apply
- Working city: Beijing
- Resume: [email protected]
Distributed Computing Engine R&D Engineer/Expert [Data Architecture]
Responsibilities
1. Participate in the R&D and optimization of the distributed computing engine systems of Kuaishou's EB-scale big data platform and solve real business requirements and performance problems; the subsystems include, but are not limited to, Flink/Hive/Spark/Presto/Druid/ClickHouse;
2. Take on the design and implementation complexity of a big data platform, analyze and identify optimization points in the system, and be responsible for improving the system's soundness, reliability, and availability;
3. Keep communicating with the open source community, introduce community features and systems that help the company's business scenarios, or contribute internally developed functionality back to the community.
Job requirements
1. Bachelor's degree or above in computer science or a related field;
2. Familiar with at least one mainstream distributed computing engine; close reading of its source code is a plus; experience with secondary development of open source systems or code contributed to an open source community is a plus;
3. Excellent design and coding skills with high engineering-quality standards; able to quickly design and implement solutions for business needs and problems;
4. Active thinker with strong analytical and problem-solving ability, a strong sense of responsibility, passion for the work, and good communication skills.
How to apply
- Working city: Beijing
- Resume: [email protected]
NetEase Cloud Music: Data Platform Development Engineer
Responsibilities
1. Responsible for the development, operation, and maintenance of Cloud Music's big data platform and tools (real-time, offline, and machine learning related) to improve the efficiency of the relevant developers;
2. Responsible for the design and development of unified stream-batch development tools, supporting Cloud Music's algorithm and statistics requirements and improving the real-timeliness of music data overall;
3. Ensure the stability and accuracy of the overall Cloud Music data pipeline and improve its maintainability;
4. Optimize overall resource usage of the big data platform, improve resource utilization, and reduce overall cost;
5. Investigate and adopt the latest big data technologies, continuously expand the platform's big data capabilities, and optimize overall service performance and efficiency.
Job requirements
1. At least 3 years of work experience; bachelor's degree or above in computer science or a related field;
2. Familiar with big data components such as Flink/Spark/Kafka/Zeppelin/Livy; able to design, optimize, and manage these systems; experience developing and operating PB-scale real-time and offline data processing jobs;
3. Familiar with distributed system architecture; experience with large-scale distributed systems;
4. Good communication and teamwork skills; diligent and able to work under the pressure of periodic projects;
5. Solid Java/Python development skills and good business understanding; machine learning experience or contributions to open source communities are preferred.
How to apply
- Working city: Hangzhou
- Resume: [email protected]
Good Future: Senior Development Engineer of Data Platform
Good Future is an educational technology company listed in the United States. Its teaching brands include Xueersi Peiyou, Xueersi Online School, Izhikang 1-on-1, and others. This position is in the Data Center of the group headquarters, supporting data analysis across the whole group. The office is located in the Kemao Electronic Building in Zhongguancun, Beijing.
Responsibilities
1. Design platform systems such as the real-time analysis platform, the algorithm platform, and the minute-level data warehouse;
2. Develop the real-time platform, the algorithm platform, and modules related to the minute-level data warehouse, and put them into production;
3. Regularly share accumulated technical knowledge and help new members get up to speed and complete their work.
Job requirements
1. Bachelor's degree or above (important), 3-5 years of Java development experience;
2. Familiar with JavaWeb development; able to build a back-end system independently;
3. Familiar with the real-time development technology stack, such as Flink, Kafka, and HBase;
4. Familiar with distributed service development; knowledge of K8s, message queues, Redis, flash-sale (seckill) scenarios, and scheduling system principles is preferred;
5. Experience with big data and high-concurrency projects, real-time platform back-end development, or algorithm platform back-end development is preferred.
Challenges and growth
- Build an algorithm platform from scratch: a one-stop, end-to-end algorithm platform with feature services for high-concurrency scenarios and flexible, scalable model deployment services based on K8s;
- Build a minute-level data warehouse from scratch to support real-time enterprise data analysis: one-stop integration of massive data, ETL processing, and millisecond-response queries;
- Upgrade and iterate the data development platform to enable efficient development of offline data, real-time data, and algorithm data;
- Nice team atmosphere; the technology stack covers JavaWeb and big data, and internal transfer to big data development positions is supported.
How to apply
- Working city: Beijing
- Resume: [email protected]
Qihoo 360: Big Data Basic Components R&D Engineer
Team Introduction:
We are responsible for the research, construction, promotion, and application of Qihoo 360's internal big data infrastructure. Driven by deep technical research and grounded in business value, we keep improving the availability, usability, and stability of big data solutions to provide a strong engine for the company's business growth. Big data technology has now entered a new stage of development: a new generation of solutions characterized by cloud-native architecture, intelligence, and lake-warehouse integration has largely taken shape, but it still needs further optimization and adaptation to real scenarios. We are looking for people who share this interest in the technology to join us and work together on China's network security.
Responsibilities
1. Participate in the secondary development of big data infrastructure components such as Flink/Spark/Yarn/Presto/Druid/Doris;
2. Participate in the R&D and adoption of new-generation big data technologies such as cloud native, data lakes, and cloud data warehouses;
3. Participate in the continuous improvement of the company's big data systems, steadily improving the stability and performance of the platform architecture.
Job requirements
1. Bachelor's degree or above from a 985/211 university, majoring in computer science, communications, mathematics, or a related field, with a good grasp of computer fundamentals;
2. Familiar with Java, with a solid foundation in data structures and algorithms;
3. Strong sense of responsibility and strong team communication and collaboration skills;
4. Understanding of the principles of one or more of Flink/Spark/Yarn/Presto/Druid/Doris is preferred;
5. Experience with PB-scale data processing and component development is preferred.
How to apply
- Working cities: Beijing, Xi'an
- Resume: [email protected]
Meituan: Big Data Computing Engine R&D Expert
Responsibilities
1. Responsible for building Meituan's distributed offline/real-time computing platform and supporting business requirements across multiple scenarios (such as data warehouse production, event processing, and machine learning);
2. Solve high-availability problems in stream computing scenarios: through a self-developed Flink HA architecture, support the stable operation of tens of thousands of Flink jobs with TB-scale state and ensure 99.99% availability of real-time data;
3. Solve the engine scalability problems brought by business growth in data warehouse production scenarios: through a self-developed Remote Shuffle Service architecture and engine-kernel modifications, support millions of Spark jobs per day, EB-scale data processing, and 100 TB shuffles in a single job, and continuously improve the stability and scalability of the production engine to ensure stable delivery of core data;
4. Solve the timeliness problem of T+1 computation in data warehouse scenarios: by exploring incremental computing architectures, support near-real-time readiness of complex computations over hundreds of billions of records;
5. Solve the problem of inconsistent computation semantics between stream and batch architectures: promote the unification of the stream and batch computing engines and of the semantic, compute, and storage layers, advancing stream-batch integration;
6. Continuously improve computing efficiency and reduce computing cost through scheduling, engine-kernel modification and optimization, and other technical means.
Job requirements
1. Good computer science fundamentals and an interest in big data computing;
2. Practical experience with and an understanding of the principles of mainstream big data computing engines (including but not limited to Flink, Spark, MapReduce, Hive, Storm); experience in engine optimization or platform construction;
3. Deep understanding of the back-end development technology stack, with experience building high-concurrency, high-availability services;
4. Strong technical curiosity and self-drive; knowledge of industry best practices.
Bonus points:
- Participation in large open source projects, especially community code contributions to engines such as Flink/Spark and Kafka;
- Full participation in enterprise data warehouse construction, or experience with data warehouse platforms;
- Experience in read/write optimization of storage systems;
- Result- and value-oriented, able to use scientific methods to collect, aggregate, and analyze metrics in order to measure outcomes.
How to apply
- Working city: Beijing
- Resume: [email protected]
Xiaohongshu (Little Red Book): Flink Data Development
Responsibilities
1. Participate in the development and design of Xiaohongshu's distributed real-time computing engine, meeting processing requirements of millisecond-level latency and throughput in the millions of events per second;
2. Participate in the development and design of the real-time computing management platform, providing a unified real-time application development and management platform and services for the company, improving application development efficiency and reducing operation and maintenance costs;
3. Participate in the architecture design of the company's core real-time business systems, including real-time recommendation, real-time reporting, real-time data exchange, and other core businesses.
Job requirements
1. Familiar with distributed processing engines such as Flink/Spark and with message middleware such as Kafka/RocketMQ (see the sketch after this list for a minimal example combining the two);
2. Proficient in Java/Scala programming, data structures, and algorithms;
3. Passionate about technology, using technology and teamwork to solve business challenges.
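The sketch referenced in requirement 1 is shown below: a minimal Flink DataStream job that consumes a Kafka topic and maintains a keyed, windowed count. It is a generic illustration under assumed settings, not Xiaohongshu code; the broker address, topic, and consumer group are placeholders.

```java
import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.connector.kafka.source.KafkaSource;
import org.apache.flink.connector.kafka.source.enumerator.initializer.OffsetsInitializer;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class KafkaCountSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000); // checkpoint every 60s so keyed state can be recovered

        // Placeholder Kafka settings -- replace with real broker/topic/group values
        KafkaSource<String> source = KafkaSource.<String>builder()
                .setBootstrapServers("localhost:9092")
                .setTopics("events")
                .setGroupId("demo-consumer")
                .setStartingOffsets(OffsetsInitializer.latest())
                .setValueOnlyDeserializer(new SimpleStringSchema())
                .build();

        env.fromSource(source, WatermarkStrategy.noWatermarks(), "kafka-events")
           // count events per key in 10-second tumbling windows
           .map(value -> Tuple2.of(value, 1L))
           .returns(Types.TUPLE(Types.STRING, Types.LONG))
           .keyBy(t -> t.f0)
           .window(TumblingProcessingTimeWindows.of(Time.seconds(10)))
           .sum(1)
           .print();

        env.execute("kafka-count-sketch");
    }
}
```

Checkpointing is enabled so the keyed window state survives failures; in a real deployment the parallelism, serialization, and sink would be tuned toward the millisecond-latency, million-events-per-second targets described in the responsibilities above.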
How to apply
- Working city: Shanghai
- Resume: [email protected]
iQiyi: Senior R&D Engineer of Big Data Storage Service & Senior R&D Engineer of Big Data OLAP Service & Senior R&D Engineer of Big Data Real-time Computing Service
Senior R&D Engineer of Big Data Storage Services
Responsibilities
1. Participate in the construction of the iQiyi data lake, build unified big data storage, and implement features such as intelligent hot/cold data tiering, unified routing, and cross-cluster HA;
2. Participate in the development of the big data intelligent operations and maintenance platform, promoting the automation, platformization, and intelligence of operations;
3. Responsible for in-depth research and optimization of services such as HDFS, HBase, and Alluxio, solving problems and challenges in large-scale distributed scenarios, ensuring service stability and efficiency, and driving iterative upgrades of the service architecture.
Job requirements
1. Bachelor's degree or above in computer science or a related field, with more than 3 years of work experience;
2. Familiar with the principles of big data storage services such as HDFS, HBase, and Alluxio, with practical experience on large-scale clusters;
3. SRE-style system development ability and experience automating operations for large-scale clusters; an engineering mindset for problem solving; experience developing operations platforms is preferred;
4. Good command of Java, with solid development experience and troubleshooting ability;
5. Solid computer fundamentals in operating systems, networking, and hardware, with rich experience in troubleshooting and tuning;
6. Keen on open source technology; contributions to open source communities are preferred (please note them in your resume);
7. Responsible and down-to-earth, with good teamwork and communication skills.
Senior R&D Engineer of Big Data OLAP Service
Responsibilities
1. Participate in building the iQiyi big data OLAP service system; responsible for in-depth research and optimization of services such as Hive, Kylin, Presto, Druid, and Iceberg to ensure service stability and query efficiency in massive-data analysis scenarios;
2. Participate in the development of the intelligent SQL engine, implementing intelligent routing, query degradation, circuit breaking, SQL auditing, anomaly diagnosis, and other functions across multiple OLAP engines, lowering the barrier to data analysis and improving query efficiency;
3. Participate in architecture upgrades such as lake-warehouse integration and real-time analysis, driving the move toward real-time data;
4. Continue to follow cutting-edge technology in the database field and drive the evolution of the OLAP service system.
Job requirements
1. Computer science major, bachelor's degree or above, with more than 2 years of work experience;
2. Solid database knowledge and practical experience with big data OLAP services in massive-data scenarios, with in-depth research on at least one of: Hive, Impala, Kudu, Kylin, Druid, Presto, Hudi/Iceberg, etc.;
3. Familiar with the OLAP ecosystem, able to weigh the strengths, weaknesses, and use cases of different services and select the right one for the business requirements;
4. Good command of Java, with solid development experience and troubleshooting ability;
5. Solid computer fundamentals in operating systems, networking, and hardware, with rich troubleshooting experience;
6. Experience automating operations for large-scale clusters; familiarity with Ansible is preferred;
7. Keen on open source technology; contributions to open source communities are preferred (please note them in your resume);
8. Responsible and down-to-earth, with good teamwork and communication skills.
Senior R&D Engineer of Big Data Real-time Computing Service
Responsibilities
1. Participate in building the iQiyi big data real-time computing service system; support the development of the real-time computing platform based on Spark/Flink/Kafka, providing one-stop real-time data integration, processing, analysis, and other services to simplify data usage and improve the efficiency of data development and analysis;
2. Responsible for in-depth research and optimization of services such as Spark, Flink, and Kafka to ensure the stability of real-time computing;
3. Participate in the architecture optimization of real-time businesses, promoting the implementation of stream-batch unification, real-time data warehouse, and data lake solutions to support real-time applications.
Job requirements
1. Computer science major, bachelor's degree or above, with more than 2 years of work experience;
2. Experience in the big data field; familiar with open source services such as Flink, Spark, and Kafka, with in-depth research on at least one of them; experience developing big data tools is preferred;
3. Solid Java foundation; proficient in Java and Spring Boot development, with good Java project experience;
4. Solid fundamentals in operating systems, data structures, and algorithms, with strong learning ability;
5. Keen on open source technology; contributions to open source communities are preferred (please note them in your resume);
6. Responsible and down-to-earth, with good teamwork and communication skills.
How to apply
- Working city: Shanghai
- Resume: [email protected]
- Email subject: {position name}-{name}
- Resume file name: {position name}-{name}
JD.com (Jingdong): Algorithm Architecture Engineer (Real-time and Offline)
Responsibilities
1. Build an efficient and stable framework for large-scale batch and stream computing engines;
2. Explore technical solutions for online machine learning, graph computing, quasi-real-time interactive queries, and more;
3. Build a complete machine learning model system covering sample processing, feature serving, model training, and model prediction.
Job requirements
1. Strong structured thinking and problem-solving ability;
2. Proficient in programming languages such as Java, Python, and C++;
3. Solid theoretical foundation in computer science, with a strong grounding in data structures and algorithms;
4. Understand the principles of parallel or distributed computing; familiar with the characteristics and technical solutions of high-concurrency, high-stability, linearly scalable systems for massive data;
5. In-depth research on and hands-on experience with one or more open source computing frameworks such as Flink/Spark/Storm/Kafka/Hive/Hadoop/Yarn is preferred;
6. In-depth research and experience in machine learning, graph computing, or OLAP is preferred.
How to apply
- Working city: Beijing
- Resume: [email protected]