Location: WFH, Toronto
Contract Duration: 4 months
Your Business Line: Creating reusable enterprise data assets across business lines and projects, and creating analytics-ready data that is modeled and ready to use. Preparing data for consumption by the analytics team is the primary goal. The role is a Data Engineer with ETL architecture responsibilities, building enterprise data assets.
Story Behind the Need
Project Summary: The organization within the bank is focused on building high-value solutions across the bank. The organization consists of accomplished professionals who blend expertise in computer science, data, machine learning, and banking to deliver innovative projects that drive the business. A key part of the group's success is having scalable, fast, and continuously evolving data processes, which is the responsibility of the Data Engineering team. As an integral part of that team, the Data Engineer will be responsible for all technical aspects of the data pipelines that power advanced and descriptive analytics work.
Candidate Value Proposition:
As Canada's International Bank, the client is a diverse and global team. Their employees speak more than 100 languages and come from more than 120 countries. They are committed to a superior customer experience and use the Bank's guiding sales practice principles to ensure they act with honesty and integrity.
Client has a dynamic work environment that will provide you with exposure to leaders in data & analytics, the opportunity to continuously learn, and support to be innovative.
• Work with business teams across the organization (Canadian Banking, Intl. Banking, Treasury) to understand their data-driven goals
• Partner with advanced analytics teams to build powerful data assets and processes
• Be the subject matter expert on data-driven processes, visualization workflows, and the tools used to analyze data
• Build automated data management and data quality capabilities into projects
• Build re-usable components to scale development across the organization, collaborating with other technical teams to streamline high-value data to the Analytics platforms
• Build re-usable managed datasets for consumption by both revenue-generating analytics and operational corporate functions (e.g., finance, risk)
• Experiment & learn!
Qualifications/Must Have skills:
1. At least 7 years of hands-on programming experience with languages such as Python, Scala, and/or Java.
2. Comfortable working in database environments (e.g., PostgreSQL, HDFS/Hive).
3. Familiarity with agile tooling to efficiently build as a team: git, Jira, Confluence, etc.
4. High aptitude for diving in and picking up new technologies as required
5. At least 5 years of hands-on experience building data pipelines
Nice to have:
1. Experience in finance is an asset
2. Experience with Spark is preferred
3. Experience with PostgreSQL, HDFS/Hive, MinIO an asset
4. Exposure to Kubernetes, Airflow, Jenkins is an asset
5. Familiarity with data management capabilities (data registration, data quality, etc.) and ability to learn advanced data management toolsets
Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent relevant experience