At Cross River, we're building the financial infrastructure that powers global innovation. With our cutting-edge suite of embedded payments, cards, and lending solutions, we enable millions of businesses and consumers to transact seamlessly and securely.
With 900+ employees worldwide and an R&D center of over 160 employees in Jerusalem, we're reshaping how financial technology is developed and delivered.
As AI capabilities accelerate across the bank, we need an engineer to design and enforce safe AI usage—protecting customer data, preserving model integrity, and meeting our regulatory obligations. You'll be the architect of guardrails, tooling, and policies that make AI both secure and useful for product and internal teams. This isn't about slowing things down; it's about building the trust layer that lets innovation move fast without breaking things.
You're a security engineer who's excited about the AI wave—someone who sees GenAI and LLMs as fascinating puzzles to secure, not just threats to mitigate. You've spent 5+ years in Security Engineering, AppSec, or Cloud Security, including at least 1–2 years getting your hands dirty with AI/ML or data-intensive systems.
You're as comfortable dissecting a prompt injection attack as you are writing a Terraform module or shipping a Python library. You know your way around AWS and/or Azure and modern app stacks (Python/TypeScript, REST/gRPC, containers/Kubernetes), and you can translate security requirements into developer-friendly tooling—not just PDF policies that gather dust.
You communicate clearly in English and Hebrew, thrive in regulated environments, and understand that security in financial services means mapping controls to frameworks like FFIEC, SOC 2, and PCI DSS—and actually having the evidence to prove it.
What You Bring to the Table
Nice to have
Hit Apply!