Lead Cybersecurity - Application Security Architect – AI Models, Frameworks & Implementation

Atlanta, Georgia


This position requires office presence a minimum of 5 days per week and is available only in the location(s) posted. No relocation is offered.

Join AT&T and reimagine the communications and technologies that connect the world. Our Chief Security Office safeguards our assets through truthful transparency, enforces accountability, and masters cybersecurity to stay ahead of threats. Bring your bold ideas and fearless risk-taking to redefine connectivity and transform how the world shares stories and experiences that matter. When you step into a career with AT&T, you won’t just imagine the future, you’ll create it.

We are seeking an Application Security Architect to secure the design, development, integration, and operation of AI/ML-enabled applications, including LLMs, agent-based systems, RAG pipelines, model-serving APIs, and AI orchestration frameworks, and to advance the vulnerability management program as it relates to AI-based vulnerabilities. This role combines application security architecture with AI security engineering to reduce risk across the full AI lifecycle, from data ingestion and model integration to inference-time protections and production governance, and to lead AI security from a vulnerability management and risk-reduction perspective. The role is primarily focused on identifying, assessing, prioritizing, and helping remediate security weaknesses across AI-enabled applications, services, models, and integration patterns in order to reduce exploitability and accelerate remediation.

The ideal candidate combines strong Application Security expertise with practical experience securing AI/ML systems, LLM-based applications, agentic workflows, and model integrations. This individual should understand both traditional AppSec principles and AI-specific attack patterns and be able to apply that knowledge to improve vulnerability discovery, risk triage, security testing, architecture review, and remediation guidance across the AI lifecycle.

We are looking for a technically minded, hands-on security architect who can evaluate AI implementations for real security risk, define effective controls, partner with engineering teams to remediate issues, and improve how AI-related vulnerabilities are managed across development and production environments. The right candidate will also bring coding aptitude and implementation experience to support secure development workflows, integrate security checks and automation, implement security controls in applications and pipelines, and build practical solutions where necessary to improve coverage, consistency, and speed.

Job Summary:

The Application Security Architect is responsible for defining and driving secure-by-design approaches for AI-enabled applications and services. This role focuses on protecting the full lifecycle of AI/ML systems, including:

  • LLM-based applications
  • Agentic workflows
  • Retrieval-augmented generation (RAG)
  • Model APIs and inference services
  • Training/fine-tuning pipelines
  • Third-party AI integrations and SaaS capabilities

The architect will work closely with application teams, enterprise architects, AI/ML engineers, developers, cloud/platform teams, and security stakeholders to establish secure patterns, identify AI-specific risks, implement technical controls, and support responsible adoption of AI capabilities across the organization.

Success in this role requires:

  • Deep understanding of application security architecture
  • Strong knowledge of AI/ML technologies, frameworks, and deployment models
  • Hands-on experience with AI security controls and implementation
  • Ability to code, automate, integrate, and validate technical solutions
  • Practical familiarity with AI security standards and threat frameworks
  • Hands-on familiarity with source control, repository workflows, CI/CD integration, and artifact/package management, including platforms such as GitHub and JFrog

Detailed Job Description:

This role is centered on securing AI-enabled applications and platforms through a combination of application security architecture, AI threat modeling, technical design review, secure implementation guidance, and control validation.

You will help define how AI solutions are securely adopted and deployed, whether they are built in-house, fine-tuned from existing models, or integrated through third-party APIs and enterprise AI platforms. This includes securing AI-related application flows such as:

  • Prompt handling
  • Model invocation
  • Data retrieval and context injection
  • Plugin/tool calling
  • Agent permissions and action boundaries
  • Output validation and post-processing
  • API exposure and service-to-service integration

You will assess and mitigate AI-specific threats such as:

  • Prompt injection
  • Jailbreaking
  • Data poisoning
  • Training-data leakage
  • Sensitive data exposure
  • Model inversion and extraction
  • Excessive agency in autonomous workflows
  • Unauthorized model/API access
  • Abuse of model-serving endpoints

The right candidate will bring an AppSec mindset first—understanding secure design, trust boundaries, authn/authz, API risk, abuse cases, and vulnerability management—while also possessing hands-on familiarity with AI ecosystems, orchestration frameworks, model integration patterns, and AI deployment architectures.

Key Responsibilities:

AI Security Architecture & Design

  • Design, review, and validate secure architectural patterns for AI/ML and LLM-enabled applications, including locally hosted models, cloud-native AI services, API-based model access, RAG systems, and agent-based workflows.
  • Define secure reference architectures for AI integrations across applications, services, and platforms.
  • Ensure security is embedded into AI solution design from the start, including trust boundaries, identity controls, data flows, model access, and output handling.
  • Advise teams on secure use of frameworks such as Azure AI Foundry, LangChain, Semantic Kernel, OpenAI/Azure OpenAI integrations, and similar orchestration or inference technologies.

AI Threat Modeling & Security Reviews

  • Lead threat modeling sessions for AI-enabled applications and platforms to identify abuse cases, architectural weaknesses, and control gaps.
  • Assess risks such as prompt injection, model evasion, data poisoning, jailbreaks, model inversion, model extraction, tool misuse, and unauthorized privilege escalation through agent workflows.
  • Conduct technical security reviews of AI applications, integrations, and architectures with clear remediation recommendations and risk prioritization.
  • Translate AI threat scenarios into practical mitigations that development and engineering teams can implement.

Guardrails, Controls & Secure Implementation

  • Define and implement AI-specific security guardrails, including prompt/input filtering, context validation, output sanitization, response validation, policy enforcement, model/tool access restrictions, and sensitive data handling controls.
  • Recommend and help implement controls for human-in-the-loop approvals, action scoping, tool permissions, content safety, and unsafe output suppression in agentic or autonomous systems.
  • Validate that security controls are effective in real usage scenarios and resilient against adversarial behavior.
  • Support application teams in integrating AI protections into code, middleware, APIs, and orchestration frameworks.

MLSecOps / DevSecOps for AI

  • Embed security into the AI/ML development lifecycle by integrating controls into CI/CD and ML pipelines, including data ingestion, model packaging, deployment, and runtime validation.
  • Help implement security scanning and policy checks for models, datasets, dependencies, containers, APIs, infrastructure-as-code, and deployment pipelines.
  • Define secure operational patterns for model versioning, rollback, promotion, and change management.
  • Partner with engineering teams to automate repeatable security checks and guardrails across AI-enabled delivery pipelines.

Software Engineering & Repository Security

  • Write, review, and where needed help implement code to support AI security controls, automation, integrations, and remediation activities.
  • Work within standard software development workflows using source control platforms such as GitHub, including branch management, pull requests, code review, and CI/CD integration.
  • Partner with engineering teams to secure repositories, workflows, secrets handling, dependency use, and release processes.
  • Support secure management of artifacts, packages, containers, and model-related assets through repositories and platforms such as JFrog Artifactory.
  • Help establish secure practices for versioning, promotion, provenance, and lifecycle management of code, models, packages, and deployment artifacts.

AI Incident Readiness & Response

  • Develop AI-focused incident response guidance and playbooks for scenarios such as prompt-based abuse, sensitive data leakage, poisoning, model misuse, or unauthorized access to AI components.
  • Support investigations involving AI-enabled applications by providing architectural context, attack-path analysis, and mitigation recommendations.
  • Help teams improve resilience and detection capabilities based on lessons learned from testing, incidents, and near misses.

Vulnerability Management for AI Systems

  • Establish processes for identifying, assessing, prioritizing, and tracking vulnerabilities or control gaps in AI-enabled applications, model-serving endpoints, datasets, orchestration layers, and supporting infrastructure.
  • Drive risk-based prioritization of AI security issues, balancing exploitability, exposure, data sensitivity, and business impact.
  • Support remediation efforts by recommending practical fixes such as architectural changes, guardrail improvements, retraining/tuning strategies, or access-control enhancements.
  • Help define how AI-related findings are documented, triaged, and governed within broader AppSec and vulnerability management workflows.

Application Security & Vulnerability Management Focus

  • Secure the data supply chain for AI systems, including training, tuning, embeddings, vector stores, and contextual retrieval components.
  • Protect against prompt injection and indirect prompt injection through layered controls, trust-boundary design, input validation, and context isolation strategies.
  • Secure API endpoints serving AI predictions or orchestration actions using strong identity, access control, rate limiting, abuse prevention, and logging/traceability.
  • Focus on risk reduction and control effectiveness for AI vulnerabilities, including cases where mitigation relies on architecture, policy, or model behavior controls rather than traditional patching.
  • Ensure secure model and artifact versioning, provenance awareness, and rollback capabilities in cases of drift, poisoning, or faulty releases.
  • Apply traditional AppSec principles—such as secure design, authn/authz, secrets protection, input handling, dependency security, and least privilege—to AI-enabled systems and integrations.

Qualifications / Requirements / Skills:

  • 7+ years of experience in application security, product security, security architecture, or secure software engineering, with at least 2–3 years focused on AI/ML or LLM security, AI-enabled application architecture, or adversarial AI security.
  • Strong background in application security principles and methodologies, including secure design review, threat modeling, vulnerability management, API security, authn/authz, and secure SDLC practices.
  • Demonstrated experience securing AI/ML systems, LLM-enabled applications, or AI integration patterns in enterprise or production environments.
  • Practical experience with AI models, frameworks, and orchestration technologies, such as Azure AI Foundry, Azure OpenAI/OpenAI APIs, LangChain, Semantic Kernel, Hugging Face, TensorFlow, PyTorch, or similar ecosystems.
  • Hands-on experience implementing security controls for AI use cases, including prompt filtering, output validation, model access controls, data protections, agent/tool guardrails, and monitoring.
  • Strong understanding of AI-specific threats such as prompt injection, jailbreaks, model inversion, data poisoning, model extraction, insecure plugins/tools, and sensitive data leakage.
  • Demonstrated ability to write, review, and implement code when needed, including scripting, prototyping, automation, integrating security controls into applications and CI/CD pipelines, and building practical solutions to support AppSec and AI security use cases.
  • Proficiency in one or more programming/scripting languages such as Python, JavaScript/TypeScript, Go, or Bash; Python strongly preferred, with the ability to work comfortably in existing codebases, automation scripts, and integration layers.
  • Experience working with cloud-native platforms and services (Azure preferred; AWS/GCP also valuable), including APIs, containers, IAM, secrets management, logging, and deployment pipelines.
  • Strong familiarity with AI and AppSec frameworks such as OWASP LLM Top 10, NIST AI RMF, MITRE ATLAS, and secure architecture principles for AI systems.
  • Practical experience working with source code repositories and modern development workflows, including branching, pull requests, code review, repository hygiene, and CI/CD integration.
  • Experience using or supporting GitHub-based development environments, including repository management, Git-based workflows, and security integration into build and deployment pipelines.
  • Familiarity with artifact, package, and binary repository management, including platforms such as JFrog Artifactory, to support secure handling of dependencies, build artifacts, containers, models, or related software assets.
  • Strong communication skills with the ability to work across engineering, architecture, data science, security, risk, and leadership stakeholders.

Education Requirements:

  • Bachelor’s degree in Computer Science, Cybersecurity, Information Security, Software Engineering, Data Science, or a related technical field; or equivalent practical experience.
  • Master’s degree in a relevant field is a plus, especially where focused on security, AI/ML, software engineering, or systems architecture.
  • Equivalent combination of education, hands-on experience, security engineering, and AI implementation experience will be considered in lieu of formal advanced degrees.

Nice-to-Haves / Preferred or Desired Skills:

  • Experience securing agentic AI systems, tool-calling architectures, or autonomous workflows with scoped permissions and human-approval gates.
  • Experience with RAG security, including vector database protections, retrieval trust boundaries, document sanitization, and context isolation.
  • Hands-on experience evaluating or red-teaming AI systems for jailbreaks, prompt injection, leakage, or unsafe action chaining.
  • Experience building internal security tooling, validation harnesses, test frameworks, or policy enforcement layers for AI-enabled applications.
  • Familiarity with MLOps/MLSecOps platforms, model registries, feature stores, and secure model lifecycle management.
  • Experience with enterprise AI governance, model risk management, or responsible AI control frameworks.
  • Relevant certifications or demonstrable equivalent experience in cloud security, application security, AI/ML security, or secure architecture.
  • Experience implementing or reviewing GitHub Actions, repository protections, branch controls, and security checks in GitHub-based CI/CD workflows.
  • Experience with JFrog Artifactory/Xray or similar tooling for artifact, package, container, and dependency management.
  • Experience contributing directly to shared codebases, internal tooling, or developer security integrations in enterprise software environments.
  • Experience securing software supply chain components, including repositories, dependencies, packages, containers, and build provenance.

Why This Role is Unique:

This role is unique because it sits at the intersection of Application Security, AI/ML architecture, and hands-on security engineering. It is not a traditional security governance role, and it is not purely an AI engineering role. We are looking for someone who can bridge both worlds: a candidate who understands how applications are built and attacked, how AI systems are integrated and abused, and how to translate that into secure architecture, practical controls, and scalable implementation patterns.

This role is an opportunity to shape how AI is adopted securely across the organization by influencing architecture, standards, implementation, and operational guardrails from the ground up. The ideal candidate will help define the future state of AI-enabled application security while also remaining close enough to the technology to validate designs, code solutions where needed, and solve real-world security problems.

Typical Goals (30/60/90 Days):

30 Days

  • Inventory current AI-enabled applications, model integrations, third-party AI services, and major use cases.
  • Build an initial view of the organization’s AI attack surface and identify the highest-risk applications or integration patterns.
  • Meet with key stakeholders across AppSec, architecture, AI/ML, engineering, platform, and risk functions to understand current capabilities and gaps.
  • Review existing standards, deployment patterns, and known AI-related risks.

60 Days

  • Establish and socialize a lightweight AI threat modeling and secure architecture review process.
  • Publish baseline AI application security standards and secure implementation guidance.
  • Prioritize top AI security control gaps and recommend near-term remediation or guardrail opportunities.
  • Begin assessment of one or more high-impact AI initiatives or platforms.

90 Days

  • Design, develop, and deliver a pilot agentic workflow that automates one high-value AppSec use case end-to-end, such as vulnerability triage, secure coding guidance, or remediation task generation, with human approval built into the process.
  • Integrate security controls or checkpoints into at least two AI/ML or LLM delivery workflows.
  • Deliver architecture recommendations and risk treatment plans for the highest-priority AI initiatives.
  • Stand up or improve repeatable processes for AI security review, control validation, and issue tracking.
  • Help define a roadmap for AI security maturity across the broader AppSec program.

Supervisor:

No

Our Lead Cybersecurity earns between $128,400 and $192,600 USD annually, not to mention all the other amazing rewards that working at AT&T offers. Individual starting salary within this range may depend on geography, experience, expertise, and education/training.

Joining our team comes with amazing perks and benefits:

  • Medical/Dental/Vision coverage  
  • 401(k) plan  
  • Tuition reimbursement program  
  • Paid Time Off and Holidays (based on date of hire, at least 23 days of vacation each year and 9 company-designated holidays)  
  • Paid Parental Leave  
  • Paid Caregiver Leave  
  • Additional sick leave beyond what state and local law require may be available but is unprotected  
  • Adoption Reimbursement  
  • Disability Benefits (short term and long term)  
  • Life and Accidental Death Insurance  
  • Supplemental benefit programs: critical illness/accident hospital indemnity/group legal  
  • Employee Assistance Programs (EAP)  
  • Extensive employee wellness programs  
  • Employee discounts up to 50% off on eligible AT&T mobility plans and accessories, AT&T internet (and fiber where available), and AT&T phone

#LI-Onsite – Full-time office role

Ready to join our team? Apply today

Weekly Hours:

40

Time Type:

Regular

Location:

Alpharetta, Georgia; Atlanta, Georgia; Bedminster, New Jersey; Bothell, Washington; Dallas, Texas; Middletown, New Jersey; Charlotte, North Carolina (9139 Research Dr)

Salary Range:

$141,300.00 - $237,400.00

It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.

Job ID R-81508-2 Date posted 05/05/2026

