💼 Experience
- Deploy containerized applications to EKS clusters using Kubernetes manifests, Helm charts, or CI/CD pipelines.
- Evaluate AWS services and features to meet project requirements, configure them using Terraform, and integrate them into the overall cloud architecture.
- Develop high-level cloud architecture designs and detailed infrastructure diagrams using AWS services, and implement them using Terraform.
- Create reusable Terraform modules to encapsulate infrastructure patterns and best practices for AWS resources.
- Provide technical leadership and guidance to a team of DevOps engineers, overseeing their work, providing mentorship, and fostering a collaborative and high-performing team environment.
- Design, implement, and maintain automated build, deployment, and configuration management systems.
- Dockerize applications to create lightweight, portable, and scalable containers that can run consistently across different environments.
- Implement and maintain monitoring and logging solutions.
- Evaluate cloud service providers (AWS, Azure, Google Cloud) and select appropriate services and features to meet specific project needs.
- Deploy iOS and Android apps via Azure DevOps pipelines.
- Monitor, maintain, and resolve software, hardware, and network issues.
- Develop automated processes for software and system administration.
- Leverage AWS ECS features to auto-scale Docker applications based on CPU metrics.
- Troubleshoot issues on user workstations and networks.
🚀 Projects
Bilingual Portfolio in Python (FastAPI · Docker · Terraform · AWS · Caddy TLS · SQLite · GitHub Actions)
Overview
This portfolio is a production‑ready, bilingual site built with FastAPI and served over HTTPS on AWS. The goal was to keep the app simple, secure, and fully automated from infra to deployment.
User‑facing features
- Bilingual routing (EN/ES): landing at `/` and CV at `/cv`
- Rich project pages with Markdown and code blocks (this page included)
- Collapsible project cards, clean centered layout, and enable/disable UI buttons via config
- Certifications section with each item linking to its Credly page (name and badge clickable)
- Freelance page at `/freelance` with calendar/hour selection and rate; “Book now” pre‑fills the contact flow
Scheduling, availability and bookings
- Weekly schedule enforced on client and server:
  - Mon–Thu 18–23, Fri 15–23, Sat/Sun 0–23 (configurable in YAML)
- Past dates/hours are disabled (client) and filtered server‑side
- Blocked dates configurable in YAML and enforced end‑to‑end
- Double‑booking prevention with a `slots` table (unique date+hour)
- Atomic booking creation: the booking and its slots are inserted within a single transaction; on conflict the API returns 409 (see the sketch below)
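To make the atomicity guarantee concrete, here is a minimal sketch of how such an insert can be done with the standard `sqlite3` module. Column names, the `selections` encoding, and the `SlotTakenError` exception are illustrative assumptions, not the project's actual code.

```python
import json
import sqlite3


class SlotTakenError(Exception):
    """Raised when a requested (date, hour) slot is already reserved."""


def create_booking(db_path, name, email, selections):
    """Insert a booking plus its slots in one transaction.

    `selections` is a list of (date, hour) pairs. If any slot already exists,
    the UNIQUE (date, hour) constraint aborts the whole insert, so either
    everything is written or nothing is.
    """
    conn = sqlite3.connect(db_path)
    try:
        with conn:  # commits on success, rolls back on exception
            cur = conn.execute(
                "INSERT INTO bookings (name, email, selections) VALUES (?, ?, ?)",
                (name, email, json.dumps(selections)),
            )
            booking_id = cur.lastrowid
            conn.executemany(
                "INSERT INTO slots (date, hour, booking_id) VALUES (?, ?, ?)",
                [(day, hour, booking_id) for day, hour in selections],
            )
            return booking_id
    except sqlite3.IntegrityError as exc:
        # Duplicate (date, hour): another booking already holds one of the slots.
        raise SlotTakenError("slot already booked") from exc
    finally:
        conn.close()
```

The API layer only has to catch `SlotTakenError` and translate it into an HTTP 409.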
Admin panel (/admin)
- Basic Auth protected (`ADMINUSER`/`ADMINPASS` via environment variables; dependency sketch below)
- Bookings:
  - List/filter/search bookings; update status (new/contacted/confirmed/cancelled)
  - Export CSV; per‑booking ICS export (calendar)
- Config editor (`/admin/config`):
  - Sections visibility toggles
  - Hero buttons: enable/disable and URL per button (writes to the correct YAML source)
  - Certifications: enable/disable and Credly URL per item
- Diagnostics panel: shows detected config files, hero button source file per key, certifications source, and build metadata (image, SHA, time)
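A minimal sketch of how the Basic Auth guard can be expressed as a FastAPI dependency, assuming the credentials come from the `ADMINUSER`/`ADMINPASS` environment variables mentioned above (the `require_admin` name is hypothetical):

```python
import os
import secrets

from fastapi import Depends, HTTPException, status
from fastapi.security import HTTPBasic, HTTPBasicCredentials

security = HTTPBasic()


def require_admin(credentials: HTTPBasicCredentials = Depends(security)) -> str:
    """Allow the request only if it carries the admin Basic Auth credentials."""
    user_ok = secrets.compare_digest(credentials.username, os.environ.get("ADMINUSER", ""))
    pass_ok = secrets.compare_digest(credentials.password, os.environ.get("ADMINPASS", ""))
    if not (user_ok and pass_ok):
        raise HTTPException(
            status_code=status.HTTP_401_UNAUTHORIZED,
            detail="Invalid credentials",
            headers={"WWW-Authenticate": "Basic"},
        )
    return credentials.username
```

Admin routes can then declare `Depends(require_admin)` and stay otherwise unchanged.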
Configuration model (YAML, last‑wins merge)
- The app merges `python/config.yaml` with the files in `python/config.d/*.yaml` (merge sketch below)
- Keys:
  - `sections.*` — toggles to show/hide pieces of the UI
  - `hero_buttons.*` — enable/URL/labels per button (contact/freelance/github/linkedin/certifications/CV)
  - `certifications[]` — list with `name`, `badge`, `url`, `enabled`, and `order`
  - `freelance.*` — rate, schedule, and blocked dates
- Admin config writes back to the originating YAML (tracked per key), so edits persist and are the source of truth
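A simplified sketch of a last‑wins merge over `python/config.yaml` and `python/config.d/*.yaml`. The real app also records which file each key came from so admin edits can be written back; that bookkeeping is omitted here.

```python
from pathlib import Path

import yaml  # PyYAML


def deep_merge(base: dict, override: dict) -> dict:
    """Recursively merge `override` into `base`; later values win."""
    merged = dict(base)
    for key, value in override.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = deep_merge(merged[key], value)
        else:
            merged[key] = value
    return merged


def load_config(base_file="python/config.yaml", overlay_dir="python/config.d"):
    """Load the base config, then apply every config.d overlay in name order."""
    config = yaml.safe_load(Path(base_file).read_text()) or {}
    for overlay in sorted(Path(overlay_dir).glob("*.yaml")):
        config = deep_merge(config, yaml.safe_load(overlay.read_text()) or {})
    return config
```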
Data persistence
- SQLite database with two tables:
  - `bookings` — core booking metadata + serialized selections
  - `slots` — one row per reserved (date, hour), primary key (date, hour)
- Writable data directory is auto‑resolved; in production the app uses `/data` mounted from the host
Architecture
- App: FastAPI + Uvicorn (listens on port 8000 inside the container)
- Reverse proxy: Caddy publishes 80/443, terminates TLS (Let’s Encrypt), redirects `www` → apex, adds HSTS, and proxies to the app
- Containerization: one image for the app; Caddy runs as a separate container in the same Docker network
Infrastructure (Terraform)
- EC2 (Amazon Linux 2023) in default VPC
- Elastic IP associated to the instance
- Security group allowing 22/80/443
- Route 53 hosted zone for `adrianmagarola.click`:
  - A record (apex) → Elastic IP
  - `www` CNAME → apex
- Remote state backend: S3 (+ DynamoDB lock)
Terraform state (sketch)
```hcl
terraform {
  backend "s3" {}
}

resource "aws_instance" "web" { /* Amazon Linux 2023, key pair, SG */ }
resource "aws_eip" "web" { instance = aws_instance.web.id }
resource "aws_route53_zone" "primary" { name = var.domain_name }
resource "aws_route53_record" "root_a" { /* apex A → EIP */ }
resource "aws_route53_record" "www" { /* CNAME www → apex */ }
```
CI/CD (GitHub Actions)
- Provision EC2 (Terraform): initializes backend and applies infra
- Build and Deploy to EC2 (Docker): builds/pushes image and deploys via SSH
- Prunes containers/images/build cache to free space on small instances
- Sends the Caddyfile base64‑encoded to avoid SSH wrapper injections
- Mounts volumes on the EC2 host:
  - `/opt/portfolio/data` → container `/data` with `PORTFOLIODATADIR=/data` (SQLite persistence)
  - `/opt/portfolio/config.d` → container `/app/python/config.d` (config persistence)
- Seeds missing config files from the image; force‑syncs `50‑certifications.yaml` (with backup) to align URLs/structure
- Exposes build metadata to the app: `APPBUILDSHA`, `APPBUILDIMAGE`, `APPBUILDTIME` (visible in admin Diagnostics)
Caddyfile (essentials)
```
{
    email admin@adrianmagarola.click
}

www.adrianmagarola.click {
    redir https://adrianmagarola.click{uri} 301
}

adrianmagarola.click {
    encode gzip
    header {
        Strict-Transport-Security "max-age=31536000; includeSubDomains; preload"
    }
    reverse_proxy portfolio:8000
}
```
Security and operability
- TLS with automatic certificates (Let’s Encrypt)
- HSTS and `www` → apex redirect
- Admin protected by Basic Auth (set `ADMINUSER`/`ADMINPASS`)
- App runs unprivileged; Caddy owns ports 80/443
- Disk housekeeping for predictable deploys
Endpoints (selected)
- `GET /` — landing (EN/ES)
- `GET /cv` — CV page; certifications column with Credly links
- `GET /freelance` — booking UI
- `POST /api/bookings` — atomic booking (enforces schedule, blocked dates, past slots; route sketch below)
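A rough sketch of what the booking endpoint's validation and conflict handling can look like, reusing the `create_booking` helper and `SlotTakenError` from the persistence sketch above. The schedule table mirrors the weekly hours listed earlier; module, field, and path names are illustrative.

```python
from datetime import date, datetime

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

from bookings import SlotTakenError, create_booking  # helpers from the sketch above (module name hypothetical)

app = FastAPI()

# Mon=0 ... Sun=6; hours mirror the weekly schedule described above.
WEEKLY_HOURS = {
    0: range(18, 24), 1: range(18, 24), 2: range(18, 24), 3: range(18, 24),
    4: range(15, 24), 5: range(0, 24), 6: range(0, 24),
}
BLOCKED_DATES: set[str] = set()  # loaded from YAML in the real app


class BookingIn(BaseModel):
    name: str
    email: str
    selections: list[tuple[str, int]]  # (ISO date, hour) pairs


@app.post("/api/bookings", status_code=201)
def post_booking(payload: BookingIn):
    now = datetime.now()
    today = date.today()
    for day_str, hour in payload.selections:
        day = datetime.strptime(day_str, "%Y-%m-%d").date()
        is_past = day < today or (day == today and hour <= now.hour)
        if is_past or day_str in BLOCKED_DATES or hour not in WEEKLY_HOURS[day.weekday()]:
            raise HTTPException(status_code=400, detail="slot not available")
    try:
        booking_id = create_booking("/data/portfolio.db", payload.name, payload.email, payload.selections)
    except SlotTakenError:
        raise HTTPException(status_code=409, detail="slot already booked")
    return {"id": booking_id}
```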
Result
- Live: https://adrianmagarola.click
- Repo: https://github.com/amagarola/portfolio
Supermarket Ticket Tracker (Flask · Gunicorn · OCR · PDF Parsing · Multi-Store · Docker · AWS · Caddy)
Overview
A production-ready web application to track and analyze supermarket purchases from PDF tickets and mobile photos. Built with Flask and deployed on AWS EC2 with automated CI/CD.
Key Features
Multi-Supermarket Support
- Mercadona, DIA, and Alcampo parsers with intelligent OCR
- Automatic store detection from ticket format (see the detection sketch below)
- Manual store selection override when needed
- OCR error tolerance for common scanning artifacts
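The store detection can be as simple as scanning the OCR'd header for chain-specific keywords; the patterns below are an illustrative guess, not the project's actual rules.

```python
import re

# Keywords each chain prints near the top of its tickets (illustrative).
STORE_PATTERNS = {
    "mercadona": re.compile(r"mercadona", re.IGNORECASE),
    "dia": re.compile(r"\bdia\b|distribuidora internacional", re.IGNORECASE),
    "alcampo": re.compile(r"alcampo", re.IGNORECASE),
}


def detect_store(ticket_text: str, manual_choice: str | None = None) -> str | None:
    """Return the detected store, honouring a manual override first."""
    if manual_choice:
        return manual_choice
    header = ticket_text[:500]  # the store name normally appears near the top
    for store, pattern in STORE_PATTERNS.items():
        if pattern.search(header):
            return store
    return None  # caller falls back to asking the user
```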
Smart Ticket Processing
- PDF and photo upload support with Tesseract OCR (extraction sketch below)
- Mobile camera integration for instant ticket capture
- Automatic product extraction with price, quantity, and category
- Invoice number tracking with deduplication to prevent re-uploads
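A condensed sketch of the OCR path, assuming Tesseract's Spanish language data is installed; the real app also uses pdfplumber when a PDF already has a text layer, which is omitted here.

```python
from pathlib import Path

import pytesseract
from pdf2image import convert_from_path
from PIL import Image


def extract_ticket_text(path: str) -> str:
    """Return raw text from a ticket, whether it is a PDF or a photo."""
    file = Path(path)
    if file.suffix.lower() == ".pdf":
        # Rasterise each page (requires poppler) and OCR it with Tesseract.
        pages = convert_from_path(str(file))
        return "\n".join(pytesseract.image_to_string(page, lang="spa") for page in pages)
    # Photos go straight through Pillow into Tesseract.
    return pytesseract.image_to_string(Image.open(file), lang="spa")
```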
Product Management
- Automatic categorization of products
- Manual product addition with store selection
- Product list with filtering by date range, category, and store
- Spending analytics and product history
Data Organization
- Per-user data isolation with Flask-Login authentication
- Invoice tracking to prevent duplicate uploads
- Product aggregation across multiple tickets
- Date-based filtering and analytics
Technical Stack
Backend
- Flask web framework with Gunicorn WSGI server
- OCR: Tesseract + pytesseract for text extraction
- PDF Processing: pdfplumber and pdf2image for ticket parsing
- Image Processing: Pillow (PIL) for photo manipulation
- Authentication: Flask-Login with bcrypt password hashing
Parser Architecture
- Modular parser system with base class inheritance (see the base-class sketch below)
- Store-specific parsers (Mercadona, DIA, Alcampo)
- OCR error tolerance patterns (handles spaces, special chars in prices)
- Detailed logging for debugging and improvement
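The parser hierarchy can be sketched roughly as follows; class and field names are illustrative, not the project's actual API.

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass


@dataclass
class ParsedProduct:
    name: str
    quantity: float
    price: float
    category: str = "uncategorised"


class TicketParser(ABC):
    """Shared shape for store-specific ticket parsers."""

    store_name: str = "generic"

    @abstractmethod
    def matches(self, text: str) -> bool:
        """Return True if the OCR'd text looks like this store's ticket."""

    @abstractmethod
    def parse_products(self, text: str) -> list[ParsedProduct]:
        """Extract product lines from the ticket text."""


class MercadonaParser(TicketParser):
    store_name = "mercadona"

    def matches(self, text: str) -> bool:
        return "mercadona" in text.lower()

    def parse_products(self, text: str) -> list[ParsedProduct]:
        products: list[ParsedProduct] = []
        for line in text.splitlines():
            pass  # store-specific line regexes and OCR-tolerance rules live here
        return products
```

Adding a store then amounts to another subclass plus an entry in whatever registry drives `matches()`.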
Deployment
- Production Server: Gunicorn with 4 workers, 120s timeout
- Containerization: Docker with multi-stage build
- Reverse Proxy: Caddy for HTTPS termination and automatic TLS
- Cloud: AWS EC2 with automated deployment via GitHub Actions
- CI/CD: Automated build, test, and deploy pipeline
Production Features
Performance
- Gunicorn multi-worker configuration for concurrent requests
- Docker image optimization with layer caching
- Static file serving through Caddy reverse proxy
Security
- User authentication with session management
- Password hashing with bcrypt
- Data isolation per user account
- HTTPS only with automatic certificate renewal
Data Management
- JSON-based data storage (gitignored for privacy)
- Per-user products, tickets, and invoices
- Reset functionality for testing phase
- Invoice deduplication on ticket deletion
OCR Processing
Challenges Solved
- OCR artifacts: the decimal comma in prices misread as a space, `+`, or `%`
- Missing spaces before tax letters (A/B)
- Inconsistent product line formats across stores
- Date and invoice number extraction from various formats
Parser Improvements
- Multiple price pattern matching (regex with alternatives; see the sketch below)
- Tax letter detection with flexible spacing
- Product line validation with detailed logging
- Store-specific parsing rules and patterns
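For illustration, a single OCR-tolerant price pattern in the spirit of the rules above; the project's actual regexes differ.

```python
import re

# Price at the end of a product line; the decimal comma may have been read
# as a space, "+" or "%", and the tax letter (A/B) may have lost its space.
PRICE_RE = re.compile(r"(?P<euros>\d+)[,.\s+%](?P<cents>\d{2})\s*(?P<tax>[AB])?\s*$")


def parse_price(line: str) -> float | None:
    match = PRICE_RE.search(line.strip())
    if not match:
        return None
    return round(int(match.group("euros")) + int(match.group("cents")) / 100, 2)


assert parse_price("LECHE ENTERA 1L 1,05 A") == 1.05
assert parse_price("PAN HOGAZA 0 95B") == 0.95
assert parse_price("no price on this line") is None
```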
Mobile-First Design
- Responsive layout for mobile ticket capture
- Camera API integration for instant photo upload
- Photo preview before submission
- Batch photo upload support
Demo
Terraform Yaml-Config
Terraform Infrastructure with YAML Configs
📌 Overview
This project provides a centralized configuration module for managing multi-environment infrastructure using YAML templates.
Instead of hardcoding parameters across multiple Terraform workloads, all configurations are defined in a single yaml-config module that other workload modules can reference.
Key Concept: The system uses a hierarchical merge strategy where base configurations are defined once, and each environment (`dev`, `qa`, `pre`, `prod`) only overrides what's different. This eliminates duplication and keeps infrastructure definitions DRY.
---
🗂️ Project Structure
```
terraform/
├── yaml-config/                  # Central configuration module
│   ├── main.tf                   # Module entry point
│   ├── outputs.tf                # Exposes hierarchical configs
│   ├── variables.tf              # Module inputs
│   ├── modules/
│   │   └── yaml-config/          # Core merge logic
│   │       ├── main.tf
│   │       └── variables.tf
│   └── config/
│       └── env/
│           ├── ec2.yaml          # Base EC2 config
│           ├── storage.yaml      # Base S3 config
│           ├── database.yaml     # Base RDS config
│           ├── network.yaml      # Base VPC config
│           ├── security.yaml     # Base security groups
│           ├── eks.yaml          # Base EKS config
│           ├── dev/              # Dev overrides
│           ├── qa/               # QA overrides
│           ├── pre/              # Pre-prod overrides
│           └── prod/             # Production overrides
│
├── workload-ec2/                 # EC2 workload module
├── workload-storage/             # S3 workload module
├── workload-database/            # RDS workload module
├── workload-network/             # VPC/networking module
├── workload-security/            # Security Groups module
└── workload-eks/                 # EKS cluster module
```
---
⚙️ How It Works
1. YAML Structure
Each YAML file follows this structure:
```yaml
context:
  environment: "${env}"

terraform:
  vars:
    storage:                      # Resource type
      s3_buckets:                 # Resource category
        first-s3:                 # Resource key
          name: "first-s3-${env}" # Properties
          versioning: true
          encryption: true
```
2. Hierarchical Merging
- Base config (`config/env/*.yaml`) applies to all environments
- Environment overrides (`config/env/{env}/*.yaml`) merge on top
- The yaml-config module uses `templatefile()` to interpolate variables
- Result: Clean, merged configuration per environment
3. Data Flow
Base YAML + Env YAML → yaml-config → Merged Config → Workload Modules → AWS Resources
Example:
storage.yaml + qa/storage.yaml → merge → module.yaml_config.env.qa.storage → for_each → S3 buckets
---
🚀 Usage Example
In a Workload Module
Step 1: Reference yaml-config
module "yaml_config" {
source = "../yaml-config"
environment = var.environment
}
Step 2: Access configurations
```hcl
locals {
  s3_buckets = try(
    module.yaml_config.env[var.environment].storage.s3_buckets,
    {}
  )
}
```
Step 3: Create resources dynamically
resource "aws_s3_bucket" "this" {
for_each = local.s3_buckets
bucket = each.value.name
# ... other properties from YAML
}
Adding New Resources
No Terraform code changes required!
- Edit `config/env/storage.yaml` (base definition)
- Override in `config/env/qa/storage.yaml` if needed
- Run `terraform apply -var="environment=qa"`
- Resources auto-created via `for_each` ✨

---
✅ Benefits
- DRY Principle → Define once, override only differences
- Centralized → Single source of truth
- Dynamic → Resources created automatically with `for_each`
- Type-safe → Structured YAML schema
- Flexible → Template interpolation (`${env}`)
- Clean separation → Config vs. logic
- Easy auditing → Clear git diffs per environment