Data Engineer x 2
Typical Day in Role:
• Migrate on-premises applications and related data to the cloud
• Design and develop automated pipelines in a cloud environment
• Design, develop, and deploy new applications directly in the cloud
• Deliver end-to-end automation of deployment, monitoring, and infrastructure management
• Apply static code analysis, unit testing, test-driven development, security testing, and automated test frameworks
• Work with large data sets using distributed computing frameworks (MapReduce, Hadoop, Hive, Apache Spark, etc.)
• Conduct ETL, SQL, and database performance tuning, troubleshooting, support, and capacity estimation to ensure the highest data quality standards
• Conduct dimensional modelling, metadata management, data cleaning and conforming, and warehouse querying
• Profile and analyze source data to identify opportunities for data quality interventions (see the illustrative sketch after this list)
• Work with business stakeholders to understand problem statements, and develop and prototype machine learning algorithms for execution in Hadoop and cloud environments
• Use sound agile development practices (code reviews, testing, etc.) to develop and deliver data products
• Provide day-to-day support and technical expertise to both technical and non-technical teams
• Translate business needs into technical requirements
• Participate in knowledge transfer within the team and business units, and identify and recommend opportunities to enhance the productivity, effectiveness, and operational efficiency of the business unit and/or team
• Monitor project progress by tracking activity, resolving problems, publishing progress reports, and recommending actions
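To make the data-profiling responsibility above concrete, here is a minimal PySpark sketch of the kind of source-data quality check involved. It is an illustration only: the data, the account_id business key, and the balance column are hypothetical placeholders, not systems or schemas named in this posting.

```python
# Minimal PySpark sketch of source-data profiling for basic data quality
# checks (null counts, duplicate keys). All data and column names here
# (account_id, balance) are hypothetical placeholders for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("source-data-profiling").getOrCreate()

# Stand-in for a real source extract, e.g. a raw landed table or file.
df = spark.createDataFrame(
    [("A001", 100.0), ("A002", None), ("A002", 250.0), ("A003", 75.5)],
    ["account_id", "balance"],
)

# Null counts per column: a first pass at spotting data quality gaps.
df.select(
    [F.count(F.when(F.col(c).isNull(), c)).alias(c) for c in df.columns]
).show()

# Duplicate-key check on the assumed business key.
dupes = df.groupBy("account_id").count().filter(F.col("count") > 1)
print(f"Duplicate account_id values: {dupes.count()}")  # -> 1 (A002)
```

In practice the null-rate and duplicate-key findings would feed the data quality interventions described above, such as conforming rules or upstream fixes.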
Candidate Requirements/Must-Have skills:
1. 10+ years of hands-on development experience with GCP technology
2. 10+ years of hands-on experience with ELT/ETL tools – dbt and DataStage required
3. 10+ years of hands-on experience with big data technologies – Spark and SQL required
4. Prior experience with, and understanding of, industry data quality processes and practices
5. Experience with data preprocessing, feature engineering, and anomaly/outlier detection (see the illustrative sketch after this list)
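As a concrete illustration of requirement 5, the sketch below applies the common interquartile-range (IQR) rule for outlier detection in pandas. The txn_amount column and the sample values are hypothetical, and the IQR rule is just one standard technique among many a candidate might use.

```python
# Minimal pandas sketch of IQR-based outlier detection, illustrating the
# anomaly/outlier-detection skill above. The "txn_amount" column is a
# hypothetical example, not taken from the posting.
import pandas as pd

def iqr_outliers(s: pd.Series, k: float = 1.5) -> pd.Series:
    """Return a boolean mask flagging values outside [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    iqr = q3 - q1
    return (s < q1 - k * iqr) | (s > q3 + k * iqr)

df = pd.DataFrame({"txn_amount": [12.0, 15.5, 14.2, 13.9, 950.0, 16.1]})
mask = iqr_outliers(df["txn_amount"])
print(df[mask])  # flags 950.0 as an outlier
```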
Nice-To-Have Skills:
• Previous banking experience and knowledge of Financial Services, Risk, and Regulatory business processes.
• Proficiency with relational databases (Oracle, DB2, Redshift, etc.).
• Proficiency in a variety of languages: Python, R, Scala, Java.
• API development experience.
• Familiarity with container environments: Docker, OpenShift.
• Familiarity with DevOps processes, pipelines, and tooling.
Soft Skills:
• Detail-oriented and skilled at summarizing and gleaning insights from large amounts of information
• Self-motivated to solve problems and able to work collaboratively with other team members
• Strong oral and written communication skills and comfortable presenting to a variety of audiences
• Open mindset and the ability to quickly adapt to new technologies
Education:
• Post-secondary degree in a technical field such as computer science, computer engineering, or a related IT field is required.