About the Company:
World is a network of real humans, built on privacy-preserving proof-of-human technology, and powered by a globally inclusive financial network that enables the free flow of digital assets for all. It is built to connect, empower, and be owned by everyone.
This opportunity is with Tools for Humanity.
About the AI & Biometrics Team:
The AI & Biometrics team is building a biometric recognition system that works reliably for more than a billion users and enables them to claim their free share of WLD. We use cutting-edge machine learning models deployed on custom hardware to enable high-quality image acquisition, identification, and fraud prevention, all while requiring minimal user interaction.
We are building a biometric recognition and fraud detection engine that works at the scale of one billion people, so it must outperform all current recognition technologies. We leverage the Orb, our powerful custom-made iris recognition and presentation attack detection device, combined with the latest research in AI and deep learning.
About the Opportunity:
We are looking for an engineer to join our high-impact team responsible for maintaining and evolving the data platform that powers our AI pipelines. This is an end-to-end, all-rounder role spanning backend development, data engineering, infrastructure, and lightweight frontend work, with responsibilities across the ingestion layer, transformation workflows, and the warehouse itself. The ideal candidate will design resilient pipelines, build secure APIs, and develop services that make our datasets reliable, discoverable, and ready for large-scale training. You will play a key role in the infrastructure that feeds and monitors our production machine learning models, ensuring that data flows seamlessly, services run reliably, and governance standards are upheld. Every solution you deliver will follow the highest security and compliance principles, ensuring that sensitive biometric data is protected with the utmost care.
This role is onsite and sits in our Munich office.
Key Responsibilities:
Design and maintain ingestion pipelines that move data from edge devices and internal services into the data platform with traceability, versioning, and high reliability
Develop and refine transformation processes to deliver clean, well‑structured tables ready for analytics, model training, and evaluation workflows
Build internal APIs and backend services that provide secure, performant access to large datasets while upholding strict governance and privacy controls
Instrument systems with metrics, automated checks, and recovery mechanisms that detect issues early and enable self‑healing responses
Contribute to MLOps tooling for dataset monitoring and model training pipelines, ensuring smooth iteration cycles for research teams
Raise engineering standards by improving CI/CD pipelines, integration tests, and dependency management
Build lightweight dashboards (Streamlit/Next.js) to make datasets and metrics accessible internally
About You:
4–6 years of experience with both Python and Go, including building production services
Comfortable with containerization and orchestration tools like Docker and Kubernetes
Experienced with AWS services (S3, KMS, IAM) and Terraform for infrastructure as code
Skilled in designing and operating data ingestion and transformation workflows, with exposure to Snowflake or other SQL‑based analytics platforms
Familiar with CI/CD pipelines and version control practices, ideally using GitHub Actions or similar tools
Committed to building systems that are secure, observable, and follow strong data governance principles
Able to contribute lightweight internal dashboards using frameworks like Streamlit or Next.js
Nice to Have:
Experience with event‑driven data pipelines using SQS, SNS, Lambda, or Step Functions
Knowledge of data partitioning strategies, schema evolution, and large‑scale dataset optimization for analytics and ML
Familiarity with metadata management, dataset versioning, and lineage tracking in production environments
Exposure to monitoring and alerting stacks such as Datadog or Prometheus
Proficiency in Rust or an interest in learning it
What we offer:
The reasonably estimated salary for this role at Tools for Humanity in our Munich office ranges from €125,000 to €153,000, plus a competitive long-term incentive package. Actual compensation is based on factors such as the candidate's skills, qualifications, and experience. In addition, Tools for Humanity offers a wide range of best-in-class, comprehensive, and inclusive employee benefits for this role, including healthcare, dental, vision, 401(k) plan and match, life insurance, flexible time off, commuter benefits, and much more.
By submitting your application, you consent to the processing and internal sharing of your CV within the company, in compliance with the GDPR.
If you don't think you meet all of the criteria but are still interested in the job, please apply. Nobody checks every box, and we're looking for someone excited to join the team.