Professional Experience
Azure Data Engineer
British Telecommunications PLC (BT)
Mar 2024 – Feb 2026
Remote · UK
Embedded within BT’s Data & Analytics division as the primary data engineering decision-maker on a platform processing 30M+ network events daily, serving multiple UK analytics teams. Fixed-term contract, concluded as scheduled.
- Architected end-to-end migration of high-volume network telemetry data from on-premises Hadoop/Hive clusters to a Microsoft Fabric Medallion Lakehouse (Bronze/Silver/Gold); improved downstream data accessibility by 40% and reduced storage overhead via Delta Lake columnar compression.
- Replaced ~50 legacy cron jobs with modular Apache Airflow DAGs containerised via Docker; pipeline failure rate dropped 25% and on-call incidents reduced ~30% in the first quarter post-launch.
- Engineered a real-time Kafka streaming pipeline for live network event ingestion — producers, consumers, dead-letter queues; reporting latency fell from 4+ hours to under 5 seconds, enabling real-time SLA monitoring for the network operations centre.
- Designed a cross-cloud data sync layer between AWS S3 and Azure Synapse Analytics serving distributed regional teams, with full audit trails under a GDPR-compliant governance framework.
- Documented architecture decisions (ADRs) and presented technical trade-offs at quarterly engineering reviews attended by senior engineering leads and product managers.
Azure Data Engineer
Intec Select Ltd
Jul 2021 – Sep 2022
UK
Contract engagement — sole architect and lead engineer delivering a greenfield Azure Lakehouse and automated data ingestion platform for a UK technology staffing firm.
- Designed and built a production Azure Medallion Lakehouse (ADLS Gen2 + Azure Synapse) serving GDPR-compliant analytics across European logistics and recruitment data; delivered on schedule and under budget.
- Built a multi-threaded Python ingestion framework containerised with Docker, automating data collection from 500+ global RESTful APIs; eliminated 15+ hours of manual weekly processing and reduced ingestion errors ~60% vs. the previous manual approach.
- Applied Spark SQL partitioning, bucketing, and predicate pushdown on multi-TB datasets; reduced query processing costs by 22% and improved average analyst query response from ~4 minutes to under 45 seconds.
- Delivered SQL and Python analysis of UK tech-market recruitment trends; findings presented to C-suite and informed headcount planning for the following two financial quarters.
Data & Integration Engineer
Checkpoint srl
Jun 2020 – May 2021
Asola, Italy
Built automated data pipelines and an integration layer for a high-volume e-commerce operation managing 500+ vendor catalogues.
- Engineered ETL workflows connecting Shopify, payment gateways, and logistics APIs to a centralised data store; automated ingestion across 500+ product catalogues, cutting manual data entry by 40% and improving transaction accuracy by 25%.
- Built Power BI dashboards delivering real-time visibility into sales, inventory, and customer behaviour; reduced weekly reporting preparation by 35%.
- Improved core database performance by 30% through schema redesign, index optimisation, and relational normalisation.