Contract – Cloud Data Engineer – hourly rate open, based on qualifications

  • Location: Columbus
  • Job Type: Contract

Contract – Cloud Data Engineer

Start Date: ASAP

Duration: 6 months

Must Haves: Experience with Apache Beam and Google Cloud

We are looking for a talented individual to help evolve our computing landscape, enabling the business to take full advantage of opportunities as they arise. The Contract – Cloud Data Engineer role requires a willingness to engage in the design, development, and implementation of varied initiatives across the enterprise portfolio. We are aggressively expanding our cloud footprint and need hands-on engineers to support this growth. You will work with cross-functional teams to identify additional data needs, guide the design approach, and develop best practices for consuming data from disparate sources. You will join a team that has been challenged to leverage advancing technologies so that IT can support the growth of a high-performing enterprise.

As a Cloud Data Engineer, you will regularly be presented with new tasks and challenges from both our business partners and IT colleagues. You will be expected to dig into challenging data problems and propose solutions that take advantage of the Google Cloud Platform and its tooling. The position requires a willingness to engage with our business partners, as well as external strategic partners, to deliver forward-thinking, enterprise-ready solutions. The role reports to the Sr. Manager of Data Architecture and Design.

Responsibilities:

  • Design and build data pipelines that handle both real-time data streams and batch-based integrations (see the sketch after this list).
  • Leverage the Apache Spark-based tooling available within Google Cloud Platform to develop data pipelines.
  • Support the development of machine learning models by making data available for analysis and exposing model output to other systems.
  • Analyze large and complex datasets to identify data quality issues early in the SDLC.
  • Work with cross-functional stakeholders to define their data needs and propose best-of-breed solutions.
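
For context, a pipeline of the kind described in the first bullet might look like the minimal Apache Beam sketch below, written in Python against the Dataflow runner on Google Cloud. All project, bucket, dataset, and table names are placeholders, and the transforms are illustrative assumptions, not a prescribed design.

    # Minimal batch pipeline sketch; a streaming variant would swap the file
    # source for beam.io.ReadFromPubSub and set streaming=True in the options.
    import apache_beam as beam
    from apache_beam.options.pipeline_options import PipelineOptions

    options = PipelineOptions(
        runner="DataflowRunner",                  # GCP's managed Beam runner
        project="example-project",                # placeholder project id
        region="us-east1",
        temp_location="gs://example-bucket/tmp",  # placeholder staging bucket
    )

    with beam.Pipeline(options=options) as p:
        (
            p
            # Batch source: read CSV order records from Cloud Storage.
            | "ReadOrders" >> beam.io.ReadFromText(
                "gs://example-bucket/orders.csv", skip_header_lines=1)
            # Parse each CSV line into a dict keyed by column name.
            | "ParseCsv" >> beam.Map(
                lambda line: dict(zip(("order_id", "amount"), line.split(","))))
            # Keep only rows above an illustrative threshold.
            | "KeepLargeOrders" >> beam.Filter(
                lambda row: float(row["amount"]) > 100.0)
            # Sink: append results to a BigQuery table.
            | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
                "example-project:analytics.large_orders",  # placeholder table
                schema="order_id:STRING,amount:FLOAT",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
        )

Because Beam abstracts over runners, the same pipeline code can be exercised locally with the DirectRunner during development and deployed to Dataflow unchanged for production batch or streaming workloads.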

Qualifications:

  • Bachelor’s degree or higher in computer science, information systems, or a related field.
  • Proficient in Java or Python; prior Apache Beam/Spark experience a plus.
  • 7+ years working in a data engineering role.
  • 3+ years of hands-on experience building and optimizing big data pipelines using Apache Beam/Spark.
  • 2+ years of hands-on experience building and integrating data pipelines with machine learning models.
  • 1+ years of hands-on experience with Google Cloud Platform (BigQuery, Dataflow/Dataprep, Cloud Functions).
  • Prior data quality analysis and remediation experience a plus.
  • Prior experience with a variety of big data tools/environments a plus (Kafka, Cassandra, Hadoop, Storm, and Spark).
Apply for this job, or contact us with questions at careers@onshoremomentum.com.