Data Engineer - AWS
- Full-time
Company Description
NEC Software Solutions (India)
On 1st July 2021, Rave Technologies became NEC Software Solutions India. This change brought us under the global NEC Corporation brand. We are proud to be part of an organisation with 122 years of experience in technology and innovation.
We have more than 30 years of experience in providing end-to-end IT services across the globe and have earned a reputation for delighting our customers by consistently surpassing expectations and helping them deliver robust, market-ready software products that meet the highest standards of engineering and user experience. Supported by more than 1,300 exceptionally talented professionals, we are a hub for offshore support and technology services.
We work with diverse industry verticals, including publishing, media, financial services, retail, healthcare and technology companies around the world. Our customers range from two-person startups to billion-dollar listed companies.
For more information, visit www.necsws.com/india.
About NEC Corporation
NEC Corporation is a Japanese multinational information technology and electronics company headquartered in Tokyo, Japan. It is recognised as a ‘Top 50 Innovative Company’ globally, and the NEC Group provides “Solutions for Society” that promote the safety, security, fairness and equality of society. Its main goal is to help create a safer society through its technological innovations.
NEC Corporation has established itself as a leader in the integration of IT and network technologies while promoting the brand statement of “Orchestrating a brighter world.” NEC enables businesses and communities to adapt to rapid changes in both society and the market, delivering the social values of safety, security, fairness and efficiency to promote a more sustainable world where everyone has the chance to reach their full potential.
For more information, visit NEC at https://www.nec.com.
Job Description
Role Overview
We are seeking an AWS Data Engineer with 4–7 years of experience to design and build cloud-native data pipelines, contribute to innovation in data engineering practices, and collaborate across teams to deliver secure, scalable, and high-quality data solutions. This role is critical to enabling real-time insights and supporting our mission to streamline enterprise operations.
Key Responsibilities
- Develop, test, deploy, orchestrate, monitor, and troubleshoot cloud-based data pipelines and automation workflows in alignment with best practices and security standards.
- Collaborate with data scientists, architects, ETL developers, and business stakeholders to capture, format, and integrate data from internal systems, external sources, and data warehouses.
- Research and experiment with batch and streaming data technologies to evaluate their business impact and suitability for current use cases.
- Contribute to the definition and continuous improvement of data engineering processes and procedures.
- Ensure data integrity, accuracy, and security across corporate data assets.
- Maintain high data quality standards for Data Services, Analytics, and Master Data Management.
- Build automated, scalable, and test-driven data pipelines.
- Apply software development practices including Git-based version control, CI/CD, and release management to enhance AWS CI/CD pipelines.
- Partner with DevOps engineers and architects to improve DataOps tools and frameworks.
Basic Qualifications
- Bachelor’s Degree in Computer Science, Engineering, or related field.
- 4–7 years of experience in application development and data engineering.
- 3+ years of experience with big data technologies.
- 3+ years of experience with cloud platforms (AWS preferred; Azure or GCP also acceptable).
- Proficiency in Python, SQL, Scala, or Java (3+ years).
- Experience with distributed computing tools such as Hadoop, Hive, EMR, Kafka, or Spark (3+ years).
- Hands-on experience with real-time data and streaming applications (3+ years).
- Experience with NoSQL databases such as MongoDB or Cassandra (3+ years).
- Data warehousing expertise with Redshift or equivalent (3+ years).
- UNIX/Linux proficiency, including shell scripting (3+ years).
- Familiarity with Agile engineering practices.
- SQL performance tuning and optimization (3+ years).
- PySpark experience (2+ years).
- Exposure to process orchestration tools (Airflow, AWS Step Functions, Luigi, or Kubeflow).
Preferred Qualifications
- Experience with Machine Learning workflows.
- Exposure to Data-as-a-Service platforms.
- Experience designing and deploying APIs.
Additional Information
- Excellent communication skills.