JPMorgan Chase Senior Software Engineer - Machine Learning in Jersey City, New Jersey
J.P. Morgan is a leader in financial services, offering innovative and intelligent solutions to clients in more than 100 countries with one of the most comprehensive global product platforms available. We have been helping our clients to do business and manage their wealth for more than 200 years and we keep their interests foremost in our minds at all times. This combination of product strength, intellectual capital and character sets us apart as an industry leader. J.P. Morgan is part of J.P. Morgan Chase & Co. (NYSE: JPM), a global financial services firm with assets of $2.0 trillion.
The Information Architecture group develops and drives solutions at the firm-wide level. The group's main purpose is to allow the firm to use its data assets as a strategic competitive advantage in the marketplace. This highly visible group provides technology solutions directly to the Chief Data Officer and the Chief Technology Officer. This position will be in the Data Visibility and Discovery group, which is instrumental in understanding what business data the firm owns, where it is located, and how it flows through different systems.
We are looking for a strong Java Developer with deep data management experience to join a team of talented technologists in the Information Architecture organization. The person in this highly visible and exciting role will be a critical member of a small team building out the data discovery, classification, and quality platform for the entire firm. Data discovery will allow J.P. Morgan to understand the semantic content of its data, which will allow the business to use its data more strategically, tighten security, and reduce data duplication. The successful candidate will have a passion for technology with an emphasis on understanding the data landscape in large and complex organizations. They will also thrive in dynamic, rapidly changing environments and will enjoy communicating and presenting at all levels of the organization, including senior management.
Responsibilities
Lead the setup and management of Global IDs software in support of Line of Business / Corporate goals of inventorying PII/confidential data, sourcing reference data, and mapping the location of key business entities
Build centralized, "lights-out" automation, process control, and scheduling to manage the thousands of data discovery jobs running at any given time
Take the lead in implementing the data discovery platform, both on a Hadoop grid and potentially hosted on the Amazon cloud
Develop machine learning and NLP algorithms to further refine data classification accuracy and scalability
Develop customized code to extend the capability of the data discovery engine and integrate the platform with other firm-wide systems
Evangelize the value of data governance and data visibility throughout the firm
Provide technical assistance and guidance to business groups using data management tools
Build BI reports and visualizations
Participate in releases and debug issues
Perform capacity planning, performance tuning, and infrastructure design
Work closely with multiple stakeholders including operations, development groups, business users, and senior managers to ensure that the data discovery agenda is successful
Required Skills and Attributes
5 years of experience with process orchestration, batch automation, and process scheduling
5 years of experience with distributed asynchronous computing architectures
5 years of development experience (threading, concurrency, messaging)
5 years of experience with RDBMS and strong SQL and PL/SQL coding skills
3-5 years of scripting experience (any of Java, Perl, shell, Python, etc.)
3-5 years of experience with integration and process flow technologies (BPMN, Mule, ESB, REST, SOAP, XML/XSD, BPEL)
Expert in debugging and analysing complex software systems, including a willingness to deep-dive into all layers of the technology stack.
Expert in Java performance tuning (SQL query tuning would be a big plus)
Desired Additional Skills
Experience with Hadoop technologies including Cloudera, Hive, Impala, YARN, and Avro
3 years of experience with cloud development including Amazon AWS would be a strong plus
3 years of experience with machine learning techniques including statistical models and neural networks
Knowledge of data quality systems including Informatica and Ab Initio
5 years of large data warehousing experience (modelling, tuning, ETL, etc)
Experience with data modelling and tools like Erwin
Data visualization experience – Tableau and QlikView preferred
Experience with infrastructure technologies (Red Hat Linux, SAN/NAS, networking, load balancing, etc.)
Working knowledge of unit, technical, integration, and user acceptance testing
Working knowledge of capacity planning and performance testing tools
JPMorgan Chase is an equal opportunity and affirmative action employer, Disability/Veteran.