Our direct client in CT is looking for an experienced Data Engineer to work closely with the Data Engineering Team and business users to continue to evolve the cloud-enabled data ecosystem. In this role, you will be responsible for the design and development of data pipelines that support system integrations as well as our analytical platforms. The Data Engineer will work as part of a cross-functional team to deliver integration solutions for enterprise initiatives, including system-to-system integrations and analytical solutions.
A successful candidate will deliver scalable and flexible solutions that conform to the firm's data design and governance strategies, utilizing our AWS environment, Amazon Redshift data warehouse, and other cloud-based technologies. Responsibilities include, but are not limited to:
- Design and develop data pipelines and integrations that are performant, scalable, and flexible.
- Work with project teams to deliver system integration pipelines, ensuring a high degree of reliability and resiliency.
- Work with business users and project teams to provide SME guidance, finalize pipeline requirements and develop level of effort estimates.
- Develop automated testing and deployment scripts to support integrations and pipelines.
- Provide database support by coding utilities, responding to user questions, and resolving problems.
- Create and maintain documentation of the data pipelines, including data flow and data lineage documentation.
- Provide tier 3 support and assist our Operations Team as required to deliver a great customer experience for our users.
- Own overall data quality as it relates to our Enterprise Data Warehouse and associated Data Marts; provide data analysis as required to troubleshoot data issues.
- Assist with integration-related code reviews and mentor junior developers.
- Implement data governance and master data management principles as part of data pipeline development and delivery.
Requirements:
- 3-5 years' experience in creating quality data pipelines and system integrations, with at least 1 year of experience in a cloud environment.
- Bachelor’s degree in Computer or Information Science or a related field, or an equivalent combination of education and experience.
- Development experience with Matillion for Redshift & AWS ELT services.
- High proficiency with relational database, NoSQL, ETL/ELT, and data integration technologies.
- Expert-level data analysis and data management skills, paired with highly effective problem-solving ability.
- Knowledge of integration methodologies and best practices.
- Expert-level SQL programming experience.
- Python programming experience. Node.js is a plus.
- Data Modeling skills and familiarity with BI Reporting Tools.
- Understanding of DevOps and Continuous Integration / Continuous Delivery (CI/CD) technologies.
- Hands-on experience with Erwin Data Modeler & Erwin Mart Server.
Experience that would set the candidate apart:
- Amazon Web Services Certifications (Architect or Developer track).
- Practical understanding of public cloud architecture.
- Experience in the processing of Real-Time Data Streams for Operational Reporting in an AWS Environment.
Job ID: 4588