Job Title: Sr. Platform Engineer (Hadoop Administration, Cloudera)
Duration: 12 months
Location: Toronto
SUMMARY OF DAY-TO-DAY RESPONSIBILITIES:
The Senior Engineer is part of the Infrastructure and Platform Design (IPD) team within Enterprise Information Management (EIM). Reporting to the Manager of the IPD Tools team, this role is responsible for designing, building, and testing an automated, resilient Big Data infrastructure and platform for the Information Excellence program. The position works proactively and effectively with EIM, ITS, and other technology and business partners to provide technical direction, support, expertise, and best practices for the systems and infrastructure that make up the Information Excellence platform.
Accountabilities
• Provide technical design leadership and oversight to the EIM Infrastructure & Platform Design team.
• Provide guidance to all delivery teams, ensuring all physical designs meet the IPD team's strict guidelines for fault tolerance and scalability.
• Lead or actively participate in POC/design sessions and provide detailed template guidelines for junior engineers to follow.
• Contribute to the development of the Hadoop platform's technical design and capability roadmap, clearly documenting interdependencies so that all platform users can design and implement components, processes, and capabilities in a seamless and expedient manner.
• Analyze business requirements and recommend optimal solutions within the technology architecture.
• Develop and document system and infrastructure configurations utilizing the SDLC methodology.
• Participate in the preparation of system implementation plans and support procedures.
• Provide ongoing system automation management support to Information Excellence teams and related business partners.
• Perform regular and frequent infrastructure risk assessments and proactively address identified risks.
• Take accountability for platform performance and recommend performance tuning to meet the various delivery teams' non-functional requirements.
MUST HAVE:
1.) Experience in system administration, information management, system automation, and testing – 5 years
2.) Big data and Hadoop experience – 3 years
3.) Strong proficiency in Linux shell scripting and system administration – 3 years
4.) Hadoop administration – 1 year
5.) Java, virtual environments, configuration and deployment automation – 1 year
6.) Experience with information technology, data, and systems management – 10 years
7.) Tools and utilities – Jenkins, Hadoop development tools and utilities (Pig, Hive, Java, Sqoop, Flume, Oozie, etc.), CDH, Podium, and Talend – 1+ year
NICE TO HAVE:
1.) Knowledge of RESTful API-based web services
2.) Former experience
3.) Cloudera experience or certification
4.) Experience in an enterprise environment