🥷 Looking for a QA Engineer (AI-Native Testing) for Meiro CDP
Hello, we are Meiro!
Welcome to Meiro – where data meets human connection! Founded in Singapore, we’ve expanded into a global family throughout Southeast Asia, Australia, New Zealand, Europe, and the Middle East. Our mission is to help businesses leverage customer data securely and seamlessly.
In today’s AI world, we focus on helping brands forge genuine connections with their customers through personalized experiences, while prioritizing data security and privacy.
Our diverse team of over 50 professionals spans the globe, from our Singapore headquarters to tech hubs in Brno and Prague. Over the past five years, we’ve partnered with top clients, including leading banks, retail chains, e-commerce platforms, and innovative travel companies.
Our devs ship features in hours using AI tools. Traditional QA can't keep up. Write Cypress tests for every edge case? The feature's already evolved twice. Sprint-based test planning? We've deployed three times before the test plan meeting. Manual test case documentation? By the time you finish, the implementation has changed.
We need someone who tells AI what to test, not someone who writes test syntax. You'll instruct agents to cover functionality, validate their work, and catch what they miss. We're discovering what QA looks like when testing moves at development speed.
💡 What We Know
- Testing happens as features emerge. Features deploy daily (sometimes hourly). No test cycles, no sprint-based planning. Validation is continuous.
- Syntax is commodity, judgment isn't. AI can write Playwright scripts faster than you. Your value 👉 knowing what matters, recognizing what AI missed, understanding when "tests pass" doesn't mean "feature works."
- Coverage is infinite, attention is finite. AI can generate unlimited test scenarios. The skill is knowing what actually matters - which edge cases break users, which security vectors are real threats, which test failures signal actual problems.
- Some things need human judgment. Authorization logic, cross-feature interactions, UX edge cases - AI tools consistently miss certain patterns. You'll build your mental model of what to always check manually.
🤔 What We're Still Figuring Out
- How much test coverage is actually useful versus just noise
- Which security/edge cases AI tools consistently miss
- When to manually verify versus trust AI-generated validation
- How to maintain test quality when everything moves this fast
You'll help us figure this out.
☀️ What you'll actually do
- Instruct AI to generate test coverage. Dev ships file upload feature → you write: "Cover: max size limits, invalid file types, concurrent uploads, malformed headers, storage quota checks" → validate AI generates real coverage (not just happy path) → identify gaps → iterate. (See the first sketch after this list.)
- Validate in real-time with developers. Dev is implementing auth changes right now. You're in Slack/Linear identifying test scenarios as they code. "Have we tested token refresh during an active session?" → instruct AI to generate a test → verify it works → document edge case found.
- Manual testing for critical paths. Payment flows, security boundaries, cross-feature interactions - things where you trust your judgment over AI-generated validation. You decide what needs human eyes.
- Pattern recognition and documentation. Notice AI consistently misses authorization checks on DELETE endpoints? Document it in Linear. Found that file upload tests never check MIME type validation? Add to your checklist of "what I always verify manually." (The second sketch below shows that DELETE gap as a test.)
- Triage what matters. Ten tests failed. Two are real issues, eight are flaky or environment problems. You distinguish signal from noise in minutes, not hours.
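To make the file-upload example above concrete, here is a minimal Playwright sketch of the coverage you would demand beyond the happy path. The endpoint, the 10 MB cap, the expected status codes, and the fixture bytes are hypothetical, chosen only to illustrate the pattern; they are not Meiro's actual API.

```typescript
// Hypothetical sketch - endpoint, size cap, and status codes are assumptions,
// and a baseURL is assumed to be set in playwright.config.ts.
import { test, expect } from '@playwright/test';

const UPLOAD_URL = '/api/files'; // assumed endpoint

test('rejects files over the size limit', async ({ request }) => {
  // 11 MB of zeroes against an assumed 10 MB cap
  const oversized = Buffer.alloc(11 * 1024 * 1024);
  const res = await request.post(UPLOAD_URL, {
    multipart: {
      file: { name: 'big.bin', mimeType: 'application/octet-stream', buffer: oversized },
    },
  });
  expect(res.status()).toBe(413); // Payload Too Large
});

test('rejects a disallowed file type behind a spoofed extension', async ({ request }) => {
  // Executable magic bytes disguised as a .png - the classic gap happy-path suites skip
  const fakePng = Buffer.from([0x4d, 0x5a, 0x90, 0x00]); // PE header, not PNG
  const res = await request.post(UPLOAD_URL, {
    multipart: {
      file: { name: 'avatar.png', mimeType: 'image/png', buffer: fakePng },
    },
  });
  expect(res.status()).toBe(422); // rejected after server-side MIME sniffing
});
```

The point is not the syntax, which AI produces in seconds, but the scenario list: you supply "oversized, spoofed type, concurrent, malformed," then verify the generated suite actually exercises each one.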
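The DELETE-authorization gap from the pattern-recognition point looks like this as a test. Again a hedged sketch: the endpoint, resource ID, and token fixture are assumptions, not real names.

```typescript
// Hypothetical sketch - endpoint, resource ID, and token source are assumptions.
import { test, expect } from '@playwright/test';

test("DELETE on another user's resource is denied", async ({ request }) => {
  // userB's token attempting to delete userA's resource; tokens are assumed
  // to come from a test fixture or environment variable
  const res = await request.delete('/api/segments/123', {
    headers: { Authorization: `Bearer ${process.env.USER_B_TOKEN}` },
  });
  // 403 or 404 are both acceptable denials; a 200 here means the
  // authorization check that AI-generated suites routinely forget is missing
  expect([403, 404]).toContain(res.status());
});
```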
🎯 What you need
Must have:
- Comfort instructing AI to generate test coverage (ChatGPT, Claude, Cursor)
- Can articulate "what could break" faster than you could write tests
- Know when to manually verify versus trust automated coverage
- Response time in minutes (async-first but available for quick calls)
- Browser dev tools proficiency
Strong plus:
- Experience with Postman/API testing (you'll instruct AI to generate these)
- Understanding of security testing (XSS, SQL injection, auth flows) - see the sketch after this list
- Any test automation background (you'll recognize bad AI-generated tests)
- Worked in environments where testing couldn't wait for sprints
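For the security-testing point above, this is the shape of an injection probe you would instruct AI to generate and then sanity-check yourself. A hedged sketch: the endpoint and query parameter are assumed for illustration.

```typescript
// Hypothetical sketch - endpoint and parameter name are assumptions.
import { test, expect } from '@playwright/test';

test('search endpoint does not reflect unescaped input', async ({ request }) => {
  const payload = '<script>alert(1)</script>';
  const res = await request.get('/api/search', { params: { q: payload } });
  expect(res.ok()).toBeTruthy();
  // The response should carry the payload escaped, never as raw markup
  const body = await res.text();
  expect(body).not.toContain('<script>');
});
```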

🔴 Why this might not work for you
- Love comprehensive test plans and controlled release cycles? This will frustrate you. Features ship hourly. You'll be validating as things emerge, instructing AI to catch up with development speed.
- Need strict test coverage metrics and detailed documentation? We care about "does it break," not "did we hit 80% coverage." The work is continuous iteration.
🟢 Why this might be perfect for you
- You're faster at describing edge cases than writing test code. You love the idea of AI handling syntax while you focus on "what matters." You can operate at development speed - validating features as they ship, not weeks later.
- And you're comfortable with honest uncertainty: we're inventing QA for AI-speed development.
🚀 What's in store for you?
- Team of ~20 engineers shipping with AI tools
- Testing speed: Hours (features → tests → validation)
- Location: Remote-first, ideally in or near the Prague/Brno timezone
- Reporting to our CTO, Adam
- Be part of a global startup, with the opportunity to become a key player in our team, shaping projects for companies across APAC and EMEA
- Although we mostly work remotely, we love getting together from time to time - for team on-sites, team lunches, or just to catch up outside of work
- Enjoy a flexible work environment, with the hybrid option to use our offices in Prague or Brno
- Education budget for personal development
- Competitive financial compensation based on your skills and experience
📌 To Apply
Don't send a traditional application. Show us:
- An AI-instructed test scenario: show how you'd tell AI to test something complex (auth flow, file uploads, etc.). What coverage would you demand? What would you check manually?
- A coverage gap you'd catch: give an example of what AI-generated tests typically miss that you'd spot.
- A speed decision: a feature is shipping in 2 hours. How do you decide what to test versus what to skip?
We'll know in 10 minutes if you get this.
