Data Engineer 1

Bread Financial

Full-time

On-site

Bangalore, Karnataka, India

₹1,000,000.00

Job Summary

The Data Engineer 1 works on data engineering projects that support analytics use cases and data ingestion pipelines, and identifies potential process or data quality issues. The team also supports marketing analytics teams with analytical tools that enable our analytics and business communities to do their jobs easier, faster, and smarter. The team brings together data from internal and external partners and builds a curated, Marketing-analytics-focused data and tools ecosystem. The Data Engineer plays a crucial role in building this ecosystem according to the Marketing analytics community's needs.

Job Description

Essential Job Functions

Collaboration - Collaborates with internal and external stakeholders to manage data logistics, including data specifications, transfers, structures, and rules. Collaborates with business users, business analysts, and technical architects to transform business requirements into analytical workbenches, tools, and dashboards that reflect usability best practices and current design trends. Demonstrates analytical, interpersonal, and professional communication skills. Learns quickly and works effectively both individually and as part of a team.

Process Improvement - Access, extract, and transform Credit and Retail data of all sizes from a variety of sources (including client marketing databases and second- and third-party data) using Hadoop, Spark, SQL, and other big data technologies. Provide automation help to analytical teams around data-centric needs using orchestration tools, SQL, and, where appropriate, other big data/cloud solutions to improve efficiency.
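For illustration only, a minimal PySpark sketch of the kind of extract-and-transform work described above; all database, table, and column names are hypothetical placeholders, not the team's actual environment:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical illustration: table and column names below are assumptions.
spark = SparkSession.builder.appName("credit_retail_transform").getOrCreate()

# Extract credit and retail transactions from a (hypothetical) Hive table.
txns = spark.table("marketing_db.transactions")

# Transform: keep recent rows and aggregate spend per customer and channel.
recent = txns.filter(F.col("txn_date") >= "2024-01-01")
summary = (
    recent.groupBy("customer_id", "channel")
          .agg(F.sum("amount").alias("total_spend"),
               F.count("*").alias("txn_count"))
)

# Load the curated result back to a Hive table for analytics teams.
summary.write.mode("overwrite").saveAsTable("marketing_db.spend_summary")
```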

Project Support - Support the Sr. Specialist and Specialist in new analytical proof-of-concept and tool exploration projects. Effectively manage time and resources to deliver concurrent projects on time and correctly. Create POCs to ingest and process streaming data using Spark and HDFS.
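A minimal sketch of such a streaming POC, assuming a hypothetical Kafka source; the broker address, topic, and HDFS paths are placeholders, and the Kafka source also requires the spark-sql-kafka package on the classpath:

```python
from pyspark.sql import SparkSession

# Proof-of-concept sketch; adjust brokers, topic, and paths as needed.
spark = SparkSession.builder.appName("streaming_ingest_poc").getOrCreate()

# Ingest a stream of events from Kafka (hypothetical topic name).
events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "marketing_events")
         .load()
         .selectExpr("CAST(value AS STRING) AS raw_event")
)

# Persist the raw events to HDFS as Parquet, with checkpointing for recovery.
query = (
    events.writeStream.format("parquet")
          .option("path", "hdfs:///data/raw/marketing_events")
          .option("checkpointLocation", "hdfs:///checkpoints/marketing_events")
          .start()
)
query.awaitTermination()
```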

Data and Analytics - Answer and troubleshoot questions about data sets and analytical tools. Develop, maintain, and enhance new and existing analytics tools to support internal customers. Ingest data from files, streams, and databases, then process it with Python and PySpark and store it in Hive or a NoSQL database. Manage data coming from different sources, including HDFS maintenance and loading of structured and unstructured data. Apply Agile Scrum methodology on the client big data platform, using Git for version control. Import and export data between HDFS and RDBMS using Sqoop. Demonstrate an understanding of Hadoop architecture and the underlying Hadoop framework, including storage management. Work on the back end using Scala, Python, and Spark to implement aggregation logic.
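As a hedged sketch of batch ingestion from files and a relational database into Hive: Sqoop itself is a standalone CLI, so the JDBC read below plays the analogous HDFS/RDBMS role. All paths, connection details, and table names are assumptions:

```python
from pyspark.sql import SparkSession

# Illustrative sketch only: paths, JDBC URL, and table names are assumptions.
spark = (
    SparkSession.builder.appName("batch_ingest")
                .enableHiveSupport()
                .getOrCreate()
)

# Ingest semi-structured data from files landed on HDFS.
clicks = spark.read.json("hdfs:///data/landing/clicks/")

# Ingest relational data over JDBC (Sqoop covers the same HDFS<->RDBMS
# transfer as a command-line tool).
accounts = (
    spark.read.format("jdbc")
         .option("url", "jdbc:postgresql://db-host:5432/marketing")
         .option("dbtable", "public.accounts")
         .option("user", "etl_user")
         .option("password", "example")  # use a secret store in practice
         .load()
)

# Join and store the curated result in Hive for downstream analytics.
curated = clicks.join(accounts, on="account_id", how="inner")
curated.write.mode("append").saveAsTable("marketing_db.curated_clicks")
```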

Technical Skills - Expert at writing complex SQL queries and analyzing databases for performance. Experience working with Microsoft Azure services such as ADLS/Blob Storage, Azure Data Factory, Azure Functions, and Databricks. Basic knowledge of REST APIs for designing networked applications.
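A hedged Databricks-style sketch of reading ADLS Gen2 data and querying it with SQL; the storage account, container, secret scope, and table layout are placeholders, and on Databricks the spark and dbutils objects are preconfigured globals:

```python
# Placeholders throughout; not an actual account or secret scope.
storage_account = "examplestorageacct"
container = "raw"

# Authenticate with an account key pulled from a (hypothetical) secret scope.
spark.conf.set(
    f"fs.azure.account.key.{storage_account}.dfs.core.windows.net",
    dbutils.secrets.get(scope="etl-secrets", key="adls-account-key"),
)

# Read Parquet data from ADLS Gen2 via the abfss:// protocol.
path = f"abfss://{container}@{storage_account}.dfs.core.windows.net/marketing/"
df = spark.read.parquet(path)
df.createOrReplaceTempView("marketing_raw")

# Complex SQL analysis runs directly against the registered view.
top_segments = spark.sql("""
    SELECT segment, COUNT(*) AS customers, AVG(lifetime_value) AS avg_ltv
    FROM marketing_raw
    GROUP BY segment
    ORDER BY avg_ltv DESC
    LIMIT 10
""")
```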

Reports to:

Working Conditions/Physical Requirements: Normal office environment

Direct Reports: 0

Minimum Qualifications

Bachelor’s Degree in Computer Science or Engineering

0 to 3 years of experience in Data & Analytics

This job description is illustrative of the types of duties typically performed by this job. It is not intended to be an exhaustive listing of each and every essential function of the job. Because job content may change from time to time, the Company reserves the right to add and/or delete essential functions from this job at any time.

Job Family:

Data and Analytics

Job Type:

Regular