About the Company:
World is a network of real humans, built on privacy-preserving proof-of-human technology, and powered by a globally inclusive financial network that enables the free flow of digital assets for all. It is built to connect, empower, and be owned by everyone.
This opportunity is with Tools for Humanity.
About the AI & Biometrics Team:
The AI & Biometrics team is building a biometric recognition system that works reliably for more than a billion users and enables them to claim their free share of WLD. We use cutting-edge machine learning models deployed on custom hardware to enable high-quality image acquisition, identification, and fraud prevention, all while requiring minimal user interaction.
We are building a biometric recognition and fraud detection engine that works at the scale of one billion people, so its performance needs to surpass all current recognition technologies. We leverage our powerful custom-made iris recognition and presentation attack detection device, the Orb, combined with the latest research in AI and deep learning.
About the Opportunity:
You will join a high-impact team that maintains and evolves the data platform powering our AI pipelines. This is an all-rounder role that combines backend development, data engineering, infrastructure, and lightweight frontend work.
Your work will span the ingestion layer, transformation workflows, and the warehouse itself: designing resilient pipelines, building secure APIs, and creating services that make our datasets reliable, discoverable, and ready for large-scale training.
You will be a key contributor to the infrastructure that feeds and monitors our machine learning models in production: ensuring that data flows seamlessly, services run reliably, and governance standards are never compromised. Every solution you build will follow the highest security standards and rigorous data governance principles, ensuring sensitive biometric data is handled with absolute care.
This role is onsite 5 days/week and sits in our Munich or San Francisco office.
Key Responsibilities:
Design and maintain ingestion pipelines that move data from edge devices and internal services into the data platform with traceability, versioning, and high reliability
Develop and refine transformation processes that deliver clean, well-structured, production-grade datasets ready for analytics, model training, and evaluation workflows, with strong schema contracts and lineage guarantees
Build internal APIs and backend services that provide secure, performant access to large datasets while upholding strict governance and privacy controls
Instrument systems with metrics, automated checks, and recovery mechanisms that detect issues early and enable self‑healing responses
Contribute to MLOps tooling for dataset monitoring and model training pipelines, ensuring smooth iteration cycles for research teams
Raise engineering standards by improving CI/CD pipelines, integration tests, and dependency management
Build lightweight dashboards (Streamlit/Next.js) to make datasets and metrics accessible internally
Design optimized, scalable, fault-tolerant real-time or near-real-time data pipelines using distributed processing tools
Own the lifecycle of critical data assets — including lineage tracking, access control, and schema enforcement
You will work with both structured and semi-structured data, combining SQL-based platforms like Snowflake with NoSQL sources like MongoDB. You'll build resilient pipelines that handle versioning and schema evolution and remain GDPR compliant.
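The sketch below is purely illustrative of the kind of ingestion step described here: it tails a MongoDB Change Stream with pymongo, tags each record with schema-version and lineage metadata, and stages newline-delimited JSON batches for a later Snowflake load. The connection string, database, collection, and path names are assumptions for illustration, not a description of our actual stack.

```python
# Hypothetical sketch: tail a MongoDB Change Stream, attach schema-version and
# lineage metadata, and stage batches as JSON files for a later warehouse load.
# Connection strings, database/collection names, and paths are illustrative only.
import json
import time
from pathlib import Path

from pymongo import MongoClient

SCHEMA_VERSION = "2024-06-01"          # assumed dataset schema tag
STAGING_DIR = Path("/tmp/staging")     # files picked up by a Snowflake COPY job (assumed)
BATCH_SIZE = 500


def flush(batch: list[dict]) -> None:
    """Write one newline-delimited JSON file per batch for the warehouse load."""
    out = STAGING_DIR / f"events_{int(time.time())}.ndjson"
    with out.open("w") as f:
        for record in batch:
            f.write(json.dumps(record, default=str) + "\n")


def run_ingestion() -> None:
    client = MongoClient("mongodb://localhost:27017")   # placeholder URI
    collection = client["devices"]["events"]            # placeholder names

    STAGING_DIR.mkdir(parents=True, exist_ok=True)
    batch: list[dict] = []

    # Change Streams require a replica set; a real pipeline would also persist
    # resume tokens so it can restart without losing events.
    with collection.watch(full_document="updateLookup") as stream:
        for change in stream:
            doc = change.get("fullDocument") or {}
            batch.append({
                "payload": doc,
                "_schema_version": SCHEMA_VERSION,       # schema contract tag
                "_source": "mongodb.devices.events",     # lineage marker
                "_ingested_at": time.time(),
            })
            if len(batch) >= BATCH_SIZE:
                flush(batch)
                batch = []


if __name__ == "__main__":
    run_ingestion()
```

A production version of this pattern would also route failures to a dead-letter path and validate records against the schema contract before the warehouse load.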
About You:
4-6 years of experience with both Python and Go, including building production services
Comfortable with containerization and orchestration tools like Docker and Kubernetes
Experienced with AWS services (S3, KMS, IAM) and Terraform for infrastructure as code
Skilled in designing and operating data ingestion and transformation workflows, with exposure to Snowflake or other SQL‑based analytics platforms
Familiar with CI/CD pipelines and version control practices, ideally using GitHub Actions or similar tools
Committed to building systems that are secure, observable, and follow strong data governance principles
Able to contribute lightweight internal dashboards using frameworks like Streamlit or Next.js
Obsessed with reliability, observability, and data governance — you care deeply about logs, metrics, and traceability
Strong fundamentals in data modeling, schema design, and backward-compatible schema evolution
Comfortable working with NoSQL systems like MongoDB, especially for building ingestion frameworks, managing schema evolution, or integrating Change Streams into ETL pipelines
Nice to have:
Experience with event-driven data pipelines using SQS, SNS, Lambda, or Step Functions (see the sketch after this list)
Knowledge of data partitioning strategies, schema evolution, and large‑scale dataset optimization for analytics and ML
Familiarity with metadata management, dataset versioning, and lineage tracking in production environments
Exposure to monitoring and alerting stacks such as Datadog or Prometheus
Proficiency in Rust or an interest in learning it
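As a minimal, hedged sketch of the event-driven pattern mentioned above (the queue URL and message shape are assumptions for illustration), an SQS-driven ingestion worker might look like this:

```python
# Hypothetical sketch of an event-driven ingestion worker: poll an SQS queue,
# process each message, and delete it only after successful handling.
# The queue URL and message format are assumptions for illustration.
import json

import boto3

QUEUE_URL = "https://sqs.eu-central-1.amazonaws.com/123456789012/example-ingest"  # placeholder


def handle(event: dict) -> None:
    """Placeholder for the actual transformation / load step."""
    print("processing", event.get("object_key"))


def poll_forever() -> None:
    sqs = boto3.client("sqs")
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL,
            MaxNumberOfMessages=10,
            WaitTimeSeconds=20,          # long polling to reduce empty receives
        )
        for msg in resp.get("Messages", []):
            handle(json.loads(msg["Body"]))
            # Delete only after successful processing; failures stay on the
            # queue and eventually land in a dead-letter queue if configured.
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])


if __name__ == "__main__":
    poll_forever()
```

In practice this logic could also run as a Lambda consumer on the queue rather than a long-lived poller; the delete-after-success semantics are the same.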
What we offer:
The reasonably estimated salary for this role at Tools for Humanity in our Munich office ranges from €125,000 to €153,000, plus a competitive long-term incentive package. Actual compensation is based on factors such as the candidate's skills, qualifications, and experience. In addition, Tools for Humanity offers a wide range of best-in-class, comprehensive, and inclusive employee benefits for this role, including healthcare, dental, vision, 401(k) plan and match, life insurance, flexible time off, commuter benefits, and much more.
By submitting your application, you consent to the processing and internal sharing of your CV within the company, in compliance with the GDPR.
If you don't think you meet all of the criteria but are still interested in the job, please apply. Nobody checks every box, and we're looking for someone excited to join the team.