Internship: Data Engineer Intern
Company Description
Avery Dennison Corporation (NYSE: AVY) is a global materials science and digital identification solutions company. We are Making Possible™ products and solutions that help advance the industries we serve, providing branding and information solutions that optimize labor and supply chain efficiency, reduce waste, advance sustainability, circularity and transparency, and better connect brands and consumers. We design and develop labeling and functional materials, radio-frequency identification (RFID) inlays and tags, software applications that connect the physical and digital, and offerings that enhance branded packaging and carry or display information that improves the customer experience. Serving industries worldwide — including home and personal care, apparel, general retail, e-commerce, logistics, food and grocery, pharmaceuticals and automotive — we employ approximately 35,000 people in more than 50 countries. Our reported sales in 2025 were $8.9 billion. Learn more at www.averydennison.com.
AVERY DENNISON IS PROUD TO BE CERTIFIED GREAT PLACE TO WORK IN AUSTRALIA, CHINA, INDIA, INDONESIA, JAPAN, SINGAPORE, THAILAND AND VIETNAM, AND RECOGNIZED AS ONE OF THE BEST COMPANIES TO WORK FOR IN ASIA IN GREATER CHINA AND VIETNAM.
AVERY DENNISON IS AN EQUAL OPPORTUNITIES EMPLOYER.
Job Description
ABOUT YOUR ROLE:
The Data Engineer Intern will be a vital member of the Operations Team, contributing significantly to Avery Dennison's operations transformation, specifically supporting improvements within our China operations. A core focus of this role is to assist in the design, development, and maintenance of the scalable data pipelines that transform raw data into actionable insights. You’ll work closely with Data Analysts to ensure that our data infrastructure is robust, efficient, and reliable.
YOUR RESPONSIBILITIES WILL INCLUDE:
Assist in the end-to-end migration of large-scale datasets from legacy servers to new infrastructure; implement validation checks to ensure data consistency and zero loss during the transition.
Support the review and profiling of existing ETL (Extract, Transform, Load) pipelines to identify bottlenecks; implement logic improvements to reduce data latency and enhance overall execution speed.
Maintain and optimize data warehouse schemas to ensure high availability, scalability, and query performance for downstream users.
Support the end-to-end lifecycle of data preparation—including screening, cleaning, and structural processing—to create high-quality datasets specifically optimized for AI model grounding and machine learning applications.
Support the Data Analyst in engaging with business stakeholders to audit technical pain points; assist in translating these real-world problems into technical specifications.
Partner closely with the Data Analyst to architect and deliver clean, structured datasets that serve as the "single source of truth" for high-level business reporting and digital transformation initiatives.
Qualifications
WHAT WE WILL BE LOOKING FOR IN YOU
Students pursuing a degree in a technical discipline (Computer Science, Software Engineering, Data Science, etc.)
Intermediate proficiency in SQL/Python with an eagerness to learn database schemas and ETL logic.
A focus on ensuring data consistency when assisting with moving datasets across servers.
An interest in learning to identify performance bottlenecks and reduce pipeline latency.
Ability to listen to business needs and assist in translating them into initial technical considerations.
A deep interest in how high-quality, structured data serves as the foundation for AI model grounding.
Additional Information
All your information will be kept confidential according to EEO guidelines.