<b>Job Description:</b><br><br>We are seeking a Data Engineer for a key client.<br><br><b>Job Responsibilities:</b><br><br>* Execute software solutions, design, development, and technical troubleshooting, thinking beyond routine or conventional approaches to build solutions and break down technical problems.<br><br>* Write secure, high-quality code and maintain algorithms that run synchronously with the appropriate systems.<br><br>* Produce architecture and design artifacts for complex applications, remaining accountable for ensuring that design constraints are met during software development.<br><br>* Apply knowledge of tools within the Software Development Life Cycle (SDLC) toolchain to improve the value realized by automation.<br><br>* Apply technical troubleshooting to break down solutions and solve technical problems of basic complexity.<br><br>* Gather, analyze, synthesize, and develop visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems.<br><br>* Proactively identify hidden problems and patterns in data, and use these insights to drive improvements in coding hygiene and system architecture.<br><br>* Contribute to software engineering communities of practice and events that explore new and emerging technologies.<br><br>* Add to a team culture of diversity, equity, inclusion, and respect.<br><br><b>Required Qualifications, Capabilities, and Skills:</b><br><br>* 4 to 7 years of Spark-on-cloud development experience.<br><br>* 4 to 7 years of strong SQL skills; Teradata is preferred, but experience with any other RDBMS is acceptable.<br><br>* Proven experience in understanding requirements related to extraction, transformation, and loading (ETL) of data using Spark on the cloud.<br><br>* Formal training or certification in software engineering concepts and 3+ years of applied experience.<br><br>* Ability to independently design, build, test, and deploy code; should be able to lead by example and guide the team with his/her technical expertise.<br><br>* Ability to identify risks/issues for the project and manage them accordingly.<br><br>* Hands-on development experience and in-depth knowledge of Java/Python, microservices, containers/Kubernetes, Spark, and SQL.<br><br>* Hands-on practical experience in system design, application development, testing, and operational stability.<br><br>* Experience developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages.<br><br>* Proficiency in coding in one or more programming languages.<br><br>* Experience across the whole Software Development Life Cycle.<br><br>* Proven understanding of agile methodologies and practices such as CI/CD, application resiliency, and security.<br><br>* Proven knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile).<br><br><b>Preferred Qualifications, Capabilities, and Skills:</b><br><br>* Knowledge of data warehousing concepts.<br><br>* Experience with Agile-based project methodology.<br><br>* Knowledge of or experience with ETL technologies such as Informatica or Ab Initio.<br><br>* People management skills are preferred but not mandatory.<br><br><b>Must Have:</b><br>- Teradata / DBMS knowledge<br>- Cloud knowledge (AWS preferred)<br>- ETL knowledge<br>- CI/CD and data warehouse concepts<br>- Java and Spark<br><br><b>Nice to Have:</b><br>- Ab Initio<br>- PostgreSQL knowledge<br>- Python<br><br><b>Required Skills:</b><br>- Datamart: Intermediate (7-8)<br>- PostgreSQL: Intermediate (7-8)<br>- Data Warehousing: Expert (9-10)<br><br><b>Preferred Skills:</b><br>- SQL: Intermediate (7-8)<br>- ETL Tools: Intermediate (7-8)<br><br>Keywords: Data Engineer. Location: Westerville, OH 43082