Aug-25-2021, 03:21 PM
Are you passionate about Big Data? Do you work with datasets measured in petabytes (PB) on large-scale Hadoop clusters? In this role we are looking for an experienced Big Data engineer to work as part of the Knowledgebase Team. You will develop big data solutions for an array of structured and unstructured data sets deployed on the cloud. The ideal candidate is a seasoned, hands-on big data developer who enjoys a fast-paced environment, is a self-starter, and has an excellent track record of delivering enterprise-scale big data solutions.
We are seeking a senior-level big data engineer to help us reimagine and modernize the back-end processing factory that fuels the data pipeline for the Black Duck Hub. As a technical star, you will collaborate with other senior members of the team to propose designs, define epics, groom backlog stories, and develop microservices that handle big data. Your hands-on background with Scala, Apache Spark, and Kafka will help us modernize existing batch processes into new, efficient, maintainable microservices. As a quality-minded person, you will be responsible for authoring repeatable automated tests for your solutions and integrating them with CI. You will become a domain expert in open source, vulnerability detection, and license compliance.
You will be responsible for:
Developing mid-sized and large projects, data pipelines, and streaming data processing
Developing new features, algorithms, and data manipulation capabilities
Developing a next-generation data processing engine using batch or streaming mechanisms
Documenting and demonstrating solutions through documentation, flowcharts, layouts, diagrams, and code comments
Delivering incremental features in an agile SDLC
Debugging existing source code and enhancing current feature sets
Applying mathematics and statistics to problem solving
You will need:
Bachelor’s degree (BSCS or equivalent)
3+ years of big data engineering experience
3+ years of general programming experience (Java, Python, Scala)
3+ years of experience with relational databases and big data: SQL, Hadoop, Bigtable, BigQuery
Cloud experience with GCP or AWS
Exposure to Apache Beam, Airflow, and PostgreSQL
Understanding of software design patterns and excellent aptitude in data structures and algorithms
Experience working with extremely large datasets and data processing at significant scale
Experience building and traversing graphs
Skill sets:
Python, Kafka, Postgres, Solr, Avro, Machine Learning, Hadoop (Hortonworks), Google Cloud
Read more / apply: infosec-jobs.com/job/5123-senior-software-engineer-id-30598