Hi! 👋 I'm Cariad!
I'm a full stack engineer looking for contract / permanent roles.
I have commercial experience in startups and global multinationals, in financial and medical technologies, with Python, C# and TypeScript / Node.js. I'm passionate about test-driven development, DevOps cultures and security.
I'm an experienced AWS engineer, Meta-certified frontend developer and IBM-certified full stack + AI developer.
I live by the sea in Exmouth, UK, so I'm looking for in-office roles in the Exeter area or remote roles from anywhere.
Freyda is a Google-backed FinTech startup building machine learning and artificial intelligence pipelines to provide insight into unstructured financial data. The platform is a B2B SaaS solution deployed to Amazon Web Services.
My key responsibility during Freyda's start-up was to deploy the company's first CI/CD pipelines.
I used HashiCorp Packer to build a custom AMI containing Python, AI development tools and supporting Bash scripts, then deployed an EC2 Auto-Scaling Group of GitLab CI runners.
This enabled the startup development team to deploy their initial services within a matter of weeks.
I worked directly with the CTO to plan and deploy the platform's architecture in AWS. I designed and developed infrastructure-as-code with CloudFormation, Python and Bash for the entire set of infrastructure, from VPCs up to CloudFront Distributions.
I was responsible on an ongoing basis for keeping architecture diagrams up to date and introducing new hires to our patterns, which helped developers contribute with confidence as early as possible.
I have a passion for platform security, which led to me being trusted to work with third-party security auditors and penetration testers. I took responsibility for reviewing their findings and leading the implementation of their recommendations, which greatly strengthened our security posture.
I also took responsibility for performing security reviews of the team's code, and identified problems such as security-fix regressions and user privilege escalation before they were deployed, protecting customer data and the company's reputation.
We required developers to use multi-factor authentication for AWS API calls, and enforced it via IAM Policies. While this certainly strengthened our security, it also caused frustration and put demands on developers' time when using the AWS CLI or SDK on local machines. To counter the distractions and save time, I took the personal initiative to create the open-source wev command-line tool to handle authentication on behalf of developers. The tool was adopted across the company, and saved development time every day.
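The enforcement relied on a standard IAM condition key. A simplified sketch of the kind of policy statement involved (the specific exempted actions are illustrative, not Freyda's real policy):

```json
{
  "Sid": "DenyWithoutMFA",
  "Effect": "Deny",
  "NotAction": [
    "sts:GetSessionToken",
    "iam:ChangePassword"
  ],
  "Resource": "*",
  "Condition": {
    "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" }
  }
}
```

The BoolIfExists operator matters here: it also denies requests where the key is absent entirely, such as calls signed with long-term access keys.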
My daily responsibility was to develop and deploy Python services. I enjoy test-driven development with high coverage, and leveraged static analysis and PEP8 compliance to support my peers with reliable and maintainable code.
I deployed my code in Docker images hosted in AWS as either Lambda Functions or Fargate Services, depending on the functional requirements. For Lambda Functions that were invoked via SQS messages published by Step Functions, I used my open-source lambdaq Python package to reduce boilerplate code. This saved me and the wider development team a significant amount of time during development and code review.
One key service I contributed to was the API backend, in which I:
I led the design and development of the platform's reporting framework, which allowed developers to create new customer-facing, Microsoft Excel-based reports using only YAML configuration files.
The framework was supported by my open-source recompose package to clean data and my rolumns package to transform it into tables.
This new framework allowed new reports to be defined far more quickly than with the previous hard-coded solution, which significantly decreased the team's response time to customer requests and increased customer satisfaction.
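The real schema is internal to Freyda, so the following is purely a hypothetical illustration of what a configuration-driven report definition can look like:

```yaml
# Hypothetical report definition -- the real schema is internal to Freyda.
report: fund-positions
source: positions          # dataset to read
columns:
  - title: Fund
    path: fund.name
  - title: Holding
    path: holding.isin
  - title: Market Value
    path: holding.value
group-by: fund.name
```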
I led the design and development of OAuth 2.0 / Single Sign-On authentication in the platform. My responsibilities included:
This release significantly increased the satisfaction of customers who wanted to manage group access via their own identity management, and helped to market the platform to larger businesses.
I led the design and development of a framework that allowed customers to import data from their own data stores.
Amongst custom integrations, this release included:
As well as importing data, these same integrations allowed reports to be generated and exported directly to customer-owned data stores. This release was key to attracting and retaining customers who were reluctant to manually duplicate their data across platforms.
Our microservices commonly needed to call the private APIs of the backend service, but this resulted in many inconsistent and fragile approaches to discovering the API function's ARN, constructing appropriate JSON payloads and anticipating the response schema.
To save development time and increase reliability, I took the initiative to build an SDK package that managed resource look-ups and provided strongly-typed functions for private APIs. This significantly reduced the bugs caused by failures to discover the private APIs and by failing to correctly parse requests and responses.
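The real SDK is internal to Freyda, so the names below are hypothetical, but the shape of the idea was: wrap each private API in a function that accepts and returns known types, so callers never hand-build JSON payloads. A minimal sketch:

```python
import json
from dataclasses import dataclass
from typing import Any

# Hypothetical names throughout: the real SDK and its private APIs are
# internal to Freyda. The point is that callers work with typed request
# and response objects rather than raw Lambda payloads.

@dataclass
class GetDocumentRequest:
    document_id: str

@dataclass
class GetDocumentResponse:
    document_id: str
    status: str

def get_document(
    client: Any,
    function_arn: str,
    request: GetDocumentRequest,
) -> GetDocumentResponse:
    """Invoke the private API and parse its response into a known type."""
    raw = client.invoke(
        FunctionName=function_arn,
        Payload=json.dumps({"documentId": request.document_id}),
    )
    body = json.loads(raw["Payload"])
    return GetDocumentResponse(
        document_id=body["documentId"],
        status=body["status"],
    )
```

Because the Lambda client is passed in rather than constructed internally, the wrapper can be exercised with a stub in unit tests, with no AWS credentials required.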
Likewise, I also developed a command-line interface that handled user authentication and provided headless access to the platform's public APIs. This was invaluable for deploying automated integration tests and removed the need for a lot of manual testing.
These tools used my open-source cline package to create command-line interfaces and asking package to interactively gather the user's configuration.
NHS Digital designs, develops and operates national IT and data services that support clinicians at work, help patients get the best care, and use data to improve treatment.
I joined NHS Digital for a 6-week contract to migrate a life-critical SOAP service from C# .NET Framework to the latest .NET Core release.
My key responsibilities were accuracy and reliability, since clinicians use the service 24/7/365 to identify patients who need life-saving care. To achieve this high quality, I took personal responsibility for identifying edge-cases and adding unit tests for them during the migration. I also added the service's first integration tests using Postman to add as much confidence as possible.
An additional complexity was the arrival of the COVID-19 crisis, which placed unexpected demands on the wider NHS Digital organisation. The final few weeks of the migration demanded incredible focus and the ability to work independently while personnel and responsibilities were shuffled around, and I remain incredibly proud of what I achieved.
My approach to testing and delivery was highlighted and celebrated during a leadership call, and I completed the migration on time.
Thomson Reuters Tax & Accounting develops integrated software products for tax and accounting professionals.
I joined Digita Open Systems Ltd (which was later acquired by Thomson Reuters Tax & Accounting) as a junior software engineer to develop Windows desktop applications in Visual Basic before we migrated to C# .NET. I was responsible for design documentation, development, testing, and liaising with the QA, marketing and support teams.
I often joined customer support calls to offer help and gather feedback, and joined company roadshows to meet customers directly.
As a senior software engineer during the migration from desktop to web applications, I contributed to the planning of SaaS architecture in our private cloud environment, as well as the design, development, test and support of C#, JavaScript (via Express.js on Node.js) and Golang microservices on Windows and Linux with an Angular.js frontend.
When the organisation decided to migrate the SaaS platform from their private cloud into Amazon Web Services, I was promoted to Software Team Lead to form a DevOps team, and given joint responsibility with a Principal Engineer to have the platform deployed to production in AWS within a year.
My greatest challenge was learning how to use AWS, from the introductory basics through to best-practices for production. The organisation didn't have any internal guidance yet as public cloud adoption was still new, so I published my own findings on my company blog as-and-when I learned new tips or topics. I also published weekly update videos so the wider development team was kept informed of the migration's progress.
Unexpectedly, developers outside of my project were also interested in my articles and videos, which led to me being invited to travel and present in-person talks across the UK and USA. I've also been told, since I left the company, that my posts are greatly missed.
To achieve the migration, my technical responsibilities included the planning of AWS architecture, deployment of CI/CD pipelines in Jenkins, and development of infrastructure-as-code with TypeScript / Node.js, Python, Bash, PowerShell and Terraform.
The production deployment to AWS was achieved on time, which made my team and me pioneers of public cloud deployment within Thomson Reuters.
Based on the success of our migration to AWS, I was invited to lead a global DevOps team tasked with developing disaster recovery strategies, security practices, and common tooling for teams across the business to adopt rather than build their own.
The team was spread across the UK, USA and India, and not only did we successfully support several projects into production, but some of our work was presented at re:Invent by corporate leadership.
I maintain, and have contributed to, many open-source projects on GitHub at @cariad. The following is a selection of my personal favourites and most influential projects.
In 2021, I earned a Mars 2020 Contributor badge from GitHub and JPL for my contributions to boto/boto3, which was used during NASA's Ingenuity helicopter mission on Mars.
The badge was spotlighted by the GitHub ReadME Project.
During the Wordle craze of 2021-22, I released wa11y.co (source), short for "Wordle Accessibility", to translate the game's emoji patterns into screen reader-friendly text to share on social media.
The app translated over 30k games per month during its peak, and earned mentions on The Verge, Slate and GameRant.
wev (documentation), short for "with environment variables", is a cross-platform command-line application for running other command-line applications with environment variables resolved at runtime.
For example, the wev-awsmfa (demo) plugin allows the AWS CLI (or any other application) to be run with credentials generated by multi-factor authentication. So, to run aws s3 ls with multi-factor authenticated credentials, you would run wev aws s3 ls.
One critical plugin I developed for the development and data science teams at Freyda was wev-awscodeartifact (demo), which generates AWS CodeArtifact tokens for accessing private repositories. This saved developers time every day by allowing wev pipenv install to work as seamlessly with the company's private repository as if using PyPI.
slash3 (documentation) is a Python package for building and navigating Amazon Web Services S3 paths, much like the built-in pathlib supports building and navigating filesystem paths.
These strongly-typed S3 paths significantly reduce the risk of errors when building or reading paths, allow new keys to be quickly built on top of prefixes, and easily convert bucket / key strings to URIs and vice versa.
I used slash3 at Freyda to develop safe and highly-tested interactions with S3.
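To illustrate the idea, here is a miniature sketch (not slash3's actual API): represent an S3 location as a type rather than a string, so joins and URI round-trips can't silently produce malformed keys.

```python
from dataclasses import dataclass

# A miniature illustration of the idea behind slash3, not its real API.

@dataclass(frozen=True)
class S3Path:
    bucket: str
    key: str = ""

    def __truediv__(self, part: str) -> "S3Path":
        # Join like pathlib, normalising stray slashes.
        if self.key:
            joined = f"{self.key.rstrip('/')}/{part.lstrip('/')}"
        else:
            joined = part.lstrip("/")
        return S3Path(self.bucket, joined)

    @property
    def uri(self) -> str:
        return f"s3://{self.bucket}/{self.key}"

    @classmethod
    def from_uri(cls, uri: str) -> "S3Path":
        bucket, _, key = uri.removeprefix("s3://").partition("/")
        return cls(bucket, key)
```

With a type like this, building a key on top of a prefix is a `/` join, and converting between bucket / key pairs and URIs is a property or constructor call rather than string surgery.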
greas3 (documentation) is a Python package and command-line application for uploading files to Amazon Web Services S3.
Unlike the AWS CLI, greas3 uses checksums rather than timestamps to avoid re-uploading files that are already present and up-to-date. This is vital in CI pipelines that upload files to S3 after pulling them from Git, since the local file's timestamp will be updated on every CI run. Naively trusting the timestamp in this case wastes significant time and network traffic, which greas3 avoids.
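The core comparison can be sketched in a few lines (simplified, and not greas3's actual code; how the real tool records and retrieves its remote checksums is its own detail):

```python
import hashlib
from pathlib import Path
from typing import Optional

# Simplified sketch of checksum-based upload skipping: upload only when
# the remote copy is missing or its recorded checksum differs from the
# local file's content digest. Timestamps never enter into it.

def needs_upload(local: Path, remote_checksum: Optional[str]) -> bool:
    if remote_checksum is None:
        return True  # Nothing uploaded yet.
    digest = hashlib.sha256(local.read_bytes()).hexdigest()
    return digest != remote_checksum
```

Because the decision depends only on content, a fresh Git checkout in CI with brand-new timestamps still skips every unchanged file.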
rolumns (documentation) is a Python package for manipulating data into tables.
For example, rolumns allows you to flatten hierarchical objects into rows, perform grouping of lists of objects, define inline value manipulations and add user-defined columns.
I used rolumns at Freyda to build a reporting framework that transforms deeply-hierarchical financial information into Microsoft Excel spreadsheets. I've also published some example usage with public APIs at cariad/rolumns-examples.
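As a miniature illustration of the flattening idea (not rolumns' actual API): map output column names to dotted paths into a hierarchical record, and walk each path to build a flat row.

```python
# A miniature illustration of the kind of flattening rolumns performs;
# this is not its real API.

def flatten(record: dict, columns: dict) -> dict:
    """Build one flat table row from a nested record.

    ``columns`` maps output column names to dotted paths into ``record``.
    """
    row = {}
    for name, path in columns.items():
        value = record
        for part in path.split("."):
            value = value[part]  # Descend one level per path segment.
        row[name] = value
    return row
```

For example, `flatten({"fund": {"name": "Alpha"}, "holding": {"value": 100}}, {"Fund": "fund.name", "Value": "holding.value"})` yields one spreadsheet-ready row per record.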
cline (documentation) is a Python package for building command-line applications. It separates the concern of understanding the command-line arguments received from the concern of operating on strongly-typed arguments, and helps developers to write readable and testable code.
I used cline to build command-line tools for Freyda.
asking (documentation) is a Python package and command-line application for gathering user input through question / answer sessions on the command-line.
Sessions are defined by schemas, which include the options for multiple choice or free text answers, regular expressions for acceptable answers, and branching depending on answers.
In the end, the user's responses are returned as a dictionary when used as a Python package, or as JSON when run as a command-line application.
Sessions can also be run non-interactively and are fully unit-testable.
I used asking to build command-line tools for Freyda.
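A miniature sketch of the idea (not asking's actual schema or API): questions are data, acceptable answers are validated against the schema, and the prompt function can be swapped for a canned one so sessions run non-interactively in tests.

```python
import re
from typing import Callable, Dict, List

# A miniature sketch of schema-driven question / answer sessions; this
# is not asking's real schema or API.

def run_session(schema: List[dict], ask: Callable[[str], str]) -> Dict[str, str]:
    """Ask each question via ``ask`` and validate answers against the schema."""
    responses = {}
    for question in schema:
        answer = ask(question["prompt"])
        pattern = question.get("pattern")
        if pattern and not re.fullmatch(pattern, answer):
            raise ValueError(f"unacceptable answer for {question['key']!r}")
        responses[question["key"]] = answer
    return responses
```

Passing `ask=input` gives an interactive session, while passing a function that reads from a dictionary of scripted answers makes the whole session unit-testable.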
lambdaq ("lambda queue") is a Python package that handles all the boilerplate for AWS Lambda functions to receive events from Step Functions state machines, either via SQS queues or by direct invocation, and to safely migrate from one to the other.
I used lambdaq at Freyda to significantly reduce the amount of time I spent developing, and the amount of time my team spent reviewing, the same code across dozens of projects.
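The boilerplate it absorbs can be sketched like this (simplified, and not lambdaq's actual API): the same message-handling logic should run whether the function is invoked directly by Step Functions or receives a batch of SQS records.

```python
import json
from typing import Callable, List

# Simplified sketch of the dispatch boilerplate; not lambdaq's real API.

def dispatch(event: dict, handle: Callable[[dict], dict]) -> List[dict]:
    """Unwrap SQS batches or pass direct invocations straight through."""
    if "Records" in event:
        # SQS delivery: each record's body is a JSON-encoded message.
        return [handle(json.loads(r["body"])) for r in event["Records"]]
    return [handle(event)]  # Direct invocation.
```

Because the event shape is resolved in one place, a handler can migrate between direct invocation and SQS delivery without its business logic changing.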
lackadaisical is a Python package for unit-testing performance; for example, to assert that a function completes within a specified number of seconds.
I used lackadaisical at Freyda to test for database performance regression.
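The kind of assertion involved can be sketched as a context manager (not lackadaisical's actual API):

```python
import time
from contextlib import contextmanager

# Sketch of a performance assertion for unit tests; this is not
# lackadaisical's real API.

@contextmanager
def completes_within(seconds: float):
    """Fail the enclosing test if the block exceeds its time budget."""
    start = time.perf_counter()
    yield
    elapsed = time.perf_counter() - start
    assert elapsed <= seconds, f"took {elapsed:.3f}s, budget {seconds}s"
```

Wrapping a database query in a budget like this turns a performance regression into an ordinary failing unit test.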
s3headersetter is a Golang command-line application for setting the Cache-Control and Content-Type HTTP headers on AWS S3 objects according to their file types.
I used s3headersetter at Freyda to set appropriate headers on deployed frontend files.
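The mapping idea can be sketched in Python (the real tool is Golang and driven by configuration; the caching rules below are illustrative, not its defaults):

```python
import mimetypes

# Illustrative sketch of mapping file types to S3 headers; the real
# s3headersetter is configuration-driven Golang.

def headers_for(key: str) -> dict:
    content_type, _ = mimetypes.guess_type(key)
    # Illustrative rule: HTML entry points must revalidate, while other
    # assets (typically content-hashed builds) can be cached for a year.
    cache = "no-cache" if key.endswith(".html") else "public, max-age=31536000"
    return {
        "Content-Type": content_type or "binary/octet-stream",
        "Cache-Control": cache,
    }
```

Setting these headers correctly matters for frontend deployments: browsers and CDNs cache hashed assets aggressively while always fetching fresh HTML.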
This website is a React.js application built with TypeScript and Radix UI components, and the source is available at github.com/cariad/cariad.earth.
PRISM Exeter is a network for LGBTQ+ professionals and students working and studying in STEMM in and around Exeter, UK.
I was invited to pitch, and then host, a talk about Internet security for LGBTQ+ journalists and communities on untrustworthy ISPs or unsafe regions.
It was a genuine privilege to talk about how encryption works, and how Tor's onion routing can help to protect identities.
TechExeter is a community group for technologists in and around Exeter, UK. Sadly, TechExeter hosts members who promote violence against trans people, and so must not be considered a safe space for queer folks.
I hosted a talk during the TechExeter Conference 2019 about managing KMS customer keys in Amazon Web Services, some best practices for their IAM policies, and how to use those keys with Secrets Manager to manage credentials for RDS databases.
AWS User Groups are peer-to-peer communities that meet to share ideas, answer questions, and learn about new services and best practices.
I was invited by colleagues at Thomson Reuters across the UK to talk at their local AWS User Groups about the challenges and our strategies for migrating our SaaS platform from on-premises hardware into Amazon Web Services.
Out & Equal is a non-profit organisation working exclusively on LGBTQ+ workplace equity, inclusion and belonging.
I was one of the UK leads of Thomson Reuters' global "Pride at Work" LGBTQ+ network, and I was invited to join our global leadership at the Out & Equal Workplace Summit in Baltimore to share my part of that story.