Title: MTS Software Engineer

Location: 

Bangalore, Karnataka, IN, 560071

Requisition ID:  68001

Job Summary

As an SDE in NetApp India's R&D division, you will be responsible for the design, development, and validation of software for Big Data engineering across both cloud and on-premises environments. You will be part of a highly skilled technical team, NetApp Active IQ.


The Active IQ DataHub platform processes over 10 trillion data points per month, feeding a multi-petabyte data lake. The platform is built using Kafka, Spark, various NoSQL databases, and a serverless platform running on Kubernetes. It enables the use of advanced AI and ML techniques to uncover opportunities to proactively protect and optimize NetApp storage, and then provides the insights and actions to make that happen. We call this "actionable intelligence."


You will work closely with a team of senior software developers and a technical director, contributing to the design, development, and testing of code. The software applications you build will be used by our internal product teams, partners, and customers.


We are looking for a hands-on lead engineer who is familiar with Spark and with Scala, Java, and/or Python. Any cloud experience is a plus. You should be passionate about learning, be creative, and have the ability to work with and mentor junior engineers.

Job Requirements

Your Responsibilities
•    Architect, design, and build our Big Data platform with an understanding of scale, performance, and fault tolerance
•    Interact with Active IQ engineering teams across geographies to leverage expertise and contribute to the tech community. 
•    Identify the right tools to deliver product features by performing research and POCs and by engaging with various open-source communities
•    Build and deploy products both on-premises and in the cloud
•    Develop and implement best-in-class monitoring processes to ensure data applications meet SLAs
•    Lead fast-moving development teams using agile methodologies, exercising strong influencing and leadership skills
•    Conduct code reviews to ensure code quality, consistency, and adherence to best practices
•    Deliver results through regular leadership and by mentoring others

Our Ideal Candidate 
•    You have a deep interest and passion for technology
•    You love to code. An ideal candidate has a GitHub repo that demonstrates coding proficiency
•    You demonstrate strong implementation aptitude, translating objectives into scalable solutions that meet end-customer needs while meeting deadlines
•    You have strong problem-solving and excellent communication skills
•    You are self-driven and motivated with the desire to work in a fast-paced, results-driven agile environment with varied responsibilities
•    You lead by example, demonstrating best practices for unit testing, CI/CD, performance testing, capacity planning, documentation, monitoring, alerting, and incident response.
•    You serve as the technical "go-to" person for our core technologies

Education

Your Qualifications 
•    9+ years of Big Data hands-on development experience 
•    Experience building (designing, developing, implementing, and tuning) distributed data-processing pipelines that handle large volumes of data, with a focus on scalability, low latency, and fault tolerance in every system built
•    Up-to-date expertise in data engineering and complex data pipeline development
•    Experience in stream processing, with working knowledge of Kafka
•    Expertise in MPP architecture and knowledge of MPP engines (e.g., Spark, Impala)
•    Expertise in Big Data technologies and in SQL and NoSQL databases (such as MongoDB, Cassandra, and time-series databases)
•    Expertise in one or more of Python/Java/Scala 
•    Awareness of data governance (data quality, metadata management, security, etc.)
 

