You’re looking for an opportunity to do something incredible, right? As an industry leader, we’re dedicated to connecting the world in groundbreaking and entertaining ways. That’s where our Technology team members really shine, combining a passion for innovation with a drive toward the future. From mobile apps to products and services, here’s your chance to create and develop life-changing innovations.
Duties: Design and implement high-performance, distributed computing products using big data technologies such as Hadoop, Hive, and/or Spark. Collaborate closely with data scientists, data engineers, and domain experts. Use distributed environment technologies, including Hadoop, HDFS (Hadoop Distributed File System), and MapReduce; big data programming languages and technologies, including Spark, Scala, and HiveQL, to write code, complete programming and documentation, and perform testing and debugging of applications; Python, Jupyter, scikit-learn, and H2O to build machine learning models; RDBMS (relational database management systems), including Oracle, Vertica, Teradata, Aster, and MySQL, to develop queries; and MongoDB and/or HBase to develop NoSQL queries. Apply knowledge of machine learning and frameworks such as H2O and scikit-learn. Apply knowledge of data science algorithms and methods, and implement production solutions with minimal guidance from data scientists. Apply knowledge of running data science algorithms in R, Python, and Scala, leveraging frameworks such as, but not limited to, H2O, scikit-learn, and/or pandas as needed. Use big data technologies to design, program, debug, and implement new and/or existing data science products and visualization solutions. Develop custom code in Python, Scala, Java, SQL, or other languages as necessary. Interact with data scientists and subject matter experts to understand how data needs to be converted, loaded, and presented. Cleanse and polish raw data into a format that data scientists can consume to create critical insights. Write documentation as needed and work in a highly agile environment.
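To give a concrete feel for the data-cleansing duty described above, here is a minimal sketch in Python (one of the listed languages). The field names, sample records, and the `cleanse` helper are hypothetical illustrations, not part of the role or any AT&T system:

```python
import csv
import io

# Hypothetical raw extract: stray whitespace, inconsistent casing,
# and one malformed row, as often seen before cleansing.
raw = io.StringIO(
    "user_id,plan,monthly_gb\n"
    " 101 ,Unlimited, 34.2\n"
    "102,unlimited,not_a_number\n"
    "103, Prepaid ,12.0\n"
)

def cleanse(rows):
    """Trim whitespace, normalize casing, coerce types; drop rows that fail."""
    cleaned = []
    for row in rows:
        try:
            cleaned.append({
                "user_id": int(row["user_id"].strip()),
                "plan": row["plan"].strip().lower(),
                "monthly_gb": float(row["monthly_gb"].strip()),
            })
        except (ValueError, KeyError):
            continue  # malformed record: skip it rather than fail the batch
    return cleaned

records = cleanse(csv.DictReader(raw))
print(records)
```

In practice the same shape of transformation would run at scale in Spark or Hive rather than in-memory Python, but the logic, normalizing and type-coercing raw records into a consumable format, is the same.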
Requirements: Requires a Master’s degree, or foreign equivalent degree, in Computer Science or Electrical Engineering, and 2 years of experience in the job offered or 2 years of experience using distributed environment technologies, including Hadoop, HDFS (Hadoop Distributed File System), and MapReduce; using big data programming languages and technologies, including Spark, Scala, and HiveQL, to write code, complete programming and documentation, and perform testing and debugging of applications; using Python, Jupyter, scikit-learn, and H2O to build machine learning models; using RDBMS (relational database management systems), including Oracle, Vertica, Teradata, Aster, and MySQL, to develop queries; and using MongoDB and/or HBase to develop NoSQL queries.
AT&T is an Affirmative Action/Equal Opportunity Employer, and we are committed to hiring a diverse and talented workforce.
This is the life – the #LifeAtATT, that is. We’re creating what’s next and having a blast doing it. You’re looking for proof? Well, see for yourself.