The Data Platform team develops and maintains centralized data infrastructure, supports ETL and ML operations, and turns data into actionable insights that help Imprint grow profitably.
The ideal candidate will have a strong background in data infrastructure maintenance, S3-based data lakes, dbt, Terraform, SQL, Python, Snowflake, Fivetran, data modeling, Jira, and AWS.
Design, develop, and maintain scalable data pipelines and ETL processes using Fivetran and other ETL tools.
Implement and manage data models using dbt to ensure data accuracy and consistency.
Write efficient, maintainable SQL code to extract, transform, and load (ETL) data from various sources.
Develop and maintain data infrastructure on AWS, including S3, Redshift, and other relevant services.
Collaborate with data analysts, data scientists, and other stakeholders to understand data requirements and deliver solutions.
Perform data quality checks and ensure data integrity across different platforms.
Monitor and optimize the performance of data systems and pipelines.
Utilize Python for data manipulation, automation, and integration tasks.
Track and manage project tasks and progress using Jira.
Stay up-to-date with emerging data engineering technologies and best practices.
Proven experience as a Data Engineer or in a similar role.
Experience with, or familiarity with, development tools such as Airflow (Python), Snowflake (SQL), and GitHub (CI/CD).
Experience with dbt (data build tool) for data transformation and modeling.
Hands-on experience with Fivetran or other ETL tools.
Proficiency in Python for data-related tasks.
Experience with AWS services such as S3, Redshift, Lambda, and Glue.
Solid understanding of data modeling concepts and techniques.
Excellent problem-solving skills and attention to detail.
Strong communication skills and the ability to work collaboratively in a team environment.