This position requires frequent collaboration with developers, architects, data product owners, and source system teams. The ideal candidate is a versatile professional with deep expertise spanning data engineering, software architecture, data analysis, visualization, BI tools, relational databases, and data warehouse architecture across traditional and cloud environments. Experience with emerging AI technologies, including Generative AI, is highly valued.
Key Roles and Responsibilities
Lead the end-to-end design, architecture, development, testing, and deployment of scalable Data & AI solutions across traditional data warehouses, data lakes, and cloud platforms such as Snowflake, Azure, AWS, Databricks, and Delta Lake.
Architect and build secure, scalable software systems, microservices, and APIs leveraging best practices in software engineering, automation, version control, and CI/CD pipelines.
Develop, optimize, and maintain complex SQL queries, Python scripts, Unix/Linux shell scripts, and AI/ML pipelines to transform, analyze, and operationalize data and AI models.
Incorporate GenAI technologies by evaluating, deploying, fine-tuning, and integrating models to enhance data products and business insights.
Translate business requirements into robust data products, including interactive dashboards and reports using Power BI, Tableau, or equivalent BI tools.
Implement rigorous testing strategies to ensure reliability, performance, and security throughout the software development lifecycle.
Lead and mentor engineering teams, fostering collaboration, knowledge sharing, and upskilling in evolving technologies including GenAI.
Evaluate and select optimal technologies for platform scalability, performance monitoring, and cost optimization in both cloud and on-premises environments.
Partner cross-functionally with development, operations, AI research, and business teams to ensure seamless delivery, support, and alignment with organizational goals.
Key Competencies
Extensive leadership and strategic experience across the full software development lifecycle and enterprise-scale data engineering projects.
Deep expertise in relational databases, data marts, data warehouses, and advanced SQL programming.
Strong hands-on experience with ETL processes, Python, Unix/Linux shell scripting, data modeling, and AI/ML pipeline integration.
Proficiency with Unix/Linux operating systems and scripting environments.
Advanced knowledge of cloud data platforms (Azure, AWS, Snowflake, Databricks, Delta Lake).
Solid understanding and practical experience with traditional AI and Generative AI technologies, including model development, deployment, and integration.
Familiarity with big data frameworks and streaming technologies such as Hadoop, Spark, and Kafka.
Experience with containerization and orchestration tools including Docker and Kubernetes.
Strong grasp of data governance, metadata management, and data security best practices.
Excellent analytical, problem-solving, and communication skills to articulate complex technical concepts and business impact.
Ability to independently lead initiatives while fostering a collaborative, innovative team culture.
Desired knowledge of software engineering best practices and architectural design patterns.
Required/Desired Skills
RDBMS and Data Warehousing — 12+ years (Required)
SQL Programming and ETL — 12+ years (Required)
Unix/Linux Shell Scripting — 8+ years (Required)
Python or other programming languages — 6+ years (Required)
Generative AI (model development, deployment, integration) — 3+ years (Desired)
Big Data Technologies (Hadoop, Spark, Kafka) — 3+ years (Desired)
Containerization and Orchestration (Docker, Kubernetes) — 2+ years (Desired)
Data Governance and Security — 3+ years (Desired)
Software Engineering and Architecture — 4+ years (Desired)
Education & Experience
Bachelor’s degree (BS/BA) in Computer Science, Scientific Computing, or a related field is desired.
Relevant certifications in data engineering, cloud platforms, or AI technologies are preferred.
A minimum of 13 years of related experience is required; the ideal candidate will have the extensive experience outlined above.
#DataEngineering
Weekly Hours: 40
Time Type: Regular
Location: Bangalore, Karnataka, India
It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.