Universalist ML/AI researcher

  • ~CZK 200K
  • Remote, On-site, Hybrid
  • Prague, London, San Francisco
  • Part-time, Full-time

Machine Learning Researcher – AI Psychology and Agent Models

Introduction

ACS Research (Alignment of Complex Systems Research Group) is a research organization focused on the study of complex systems composed of humans and AI.

We are seeking an experienced ML Researcher to join our team investigating the "psychology" of LLM agents from an AI safety perspective. The ideal candidate must be able to independently implement training pipelines, post-training modifications, and reinforcement learning (RL), as well as conduct experiments and evaluate the results.

We offer a unique opportunity to work on cutting-edge research at the intersection of ML, psychology, economics, and cognitive sciences.

What the Job Looks Like

Your primary responsibility will be to bridge theoretical research with the empirical testing of LLM behaviors, utilizing both API access and open-weight models.

  • Typical Day: A combination of theory (staying current with the field, conceptual work, discussions with colleagues), designing experiments, and implementing them.

  • Responsibilities: Designing and implementing ML experiments, analyzing results, collaborating on reports and academic papers, and potentially presenting findings at major conferences (e.g., NeurIPS, ICML).

  • Team: You will work in a small, interdisciplinary team combining expertise from ML, psychology, economics, and philosophy.

  • Challenges: Working at the frontier of current knowledge, the necessity to quickly navigate different scientific disciplines, and connecting abstract concepts with concrete ML implementations.

Tools and Technologies

This position requires the ability to efficiently translate research ideas into code and analyze datasets.

  • Programming: Primarily Python, Claude Code. Whatever works.

  • Machine Learning: A typical task might involve something like "running RL on Kimi K2 based on a signal generated by a different LLM accessed via an API" (a rough sketch of such a loop appears at the end of this section).

  • Tools: Whatever is necessary to achieve the research goals.

  • Infrastructure: Cloud-based. We do not intend to operate our own hardware; we have a substantial budget for compute.

Work is organized with a strong emphasis on achieving research objectives and rapid iteration of ideas.
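To illustrate the kind of task mentioned above, here is a minimal, hedged sketch (not our actual pipeline): a REINFORCE-style update on a small open-weight stand-in model, with the reward produced by a separate judge LLM behind an OpenAI-compatible API. The model names, judge prompt, and hyperparameters are placeholder assumptions; in practice you would use a proper RL trainer (e.g., GRPO/PPO), a reward baseline, and much larger models.

  # Illustrative sketch only. Model names and the judge prompt are placeholders.
  import torch
  from torch.nn import functional as F
  from transformers import AutoModelForCausalLM, AutoTokenizer
  from openai import OpenAI

  POLICY_NAME = "Qwen/Qwen2.5-0.5B-Instruct"  # small stand-in for a model like Kimi K2
  JUDGE_MODEL = "gpt-4o-mini"                 # any judge reachable via an OpenAI-compatible API

  tokenizer = AutoTokenizer.from_pretrained(POLICY_NAME)
  policy = AutoModelForCausalLM.from_pretrained(POLICY_NAME)
  optimizer = torch.optim.AdamW(policy.parameters(), lr=1e-6)
  judge = OpenAI()  # reads OPENAI_API_KEY (and optionally OPENAI_BASE_URL) from the environment

  def judge_reward(prompt: str, completion: str) -> float:
      """Ask the judge LLM to score a completion; map its 0-10 rating to [0, 1]."""
      resp = judge.chat.completions.create(
          model=JUDGE_MODEL,
          messages=[{
              "role": "user",
              "content": "Rate the reply to the prompt on a 0-10 scale. "
                         f"Answer with a single number.\n\nPrompt: {prompt}\n\nReply: {completion}",
          }],
      )
      try:
          return float(resp.choices[0].message.content.strip()) / 10.0
      except ValueError:
          return 0.0  # unparseable judge output counts as zero reward

  def reinforce_step(prompt: str) -> float:
      """Sample one completion, score it with the judge, take one policy-gradient step."""
      inputs = tokenizer(prompt, return_tensors="pt")
      prompt_len = inputs.input_ids.shape[1]

      with torch.no_grad():
          sampled = policy.generate(**inputs, do_sample=True, max_new_tokens=64)
      completion_ids = sampled[:, prompt_len:]
      completion = tokenizer.decode(completion_ids[0], skip_special_tokens=True)

      reward = judge_reward(prompt, completion)

      # Log-probabilities of the sampled completion tokens under the current policy.
      logits = policy(sampled).logits[:, prompt_len - 1:-1, :]
      token_logprobs = F.log_softmax(logits, dim=-1).gather(
          -1, completion_ids.unsqueeze(-1)).squeeze(-1)

      # Plain REINFORCE (no baseline): maximize reward-weighted log-likelihood.
      loss = -reward * token_logprobs.sum()
      optimizer.zero_grad()
      loss.backward()
      optimizer.step()
      return reward

  if __name__ == "__main__":
      for _ in range(3):
          print("judge reward:", reinforce_step("In one sentence, what is an LLM agent?"))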

What We Offer

We are looking for a key member of a new team on a full-time basis. We have a preference for in-person collaboration.

  • Impact and Environment: A stimulating and intellectually challenging environment. Meaningful work on AI Safety research that may have a crucial impact on the future.

  • Locations: Prague, London, San Francisco.

  • Compensation: Approximately 200,000 – 500,000 CZK monthly (or equivalent in local currency), depending on skills, demonstrable experience, seniority, and expected contribution. Cooperation is possible via a standard employment contract or as a contractor.

  • Benefits: Work flexibility. Generally, if standard corporate perks like "in-office fitness centers" or "vacation allowances" are crucial decision factors for you, you are likely not the right candidate for this role.

Our Requirements

We are looking for competent individuals who can lead the implementation of ML experiments involving LLMs end to end, translate theoretical designs into functional systems, and are willing to travel.

Must-Haves:

  • Ability to independently design, implement, and evaluate complex ML experiments.

  • Fluent written and spoken English (the primary working language).

  • Willingness to work remotely and to attend meetings with collaborators on the US West Coast approximately two evenings per week.

  • Strong interest in AI Safety and Alignment and the ability to clearly communicate complex ideas.

  • Willingness to travel (typically 2-3 months per year).

Ideal Candidate Profile

  • Strong foundations in Machine Learning and proven experience with post-training SOTA LLMs.

  • Exceptional general mathematical, analytical, and programming skills.

Conclusion and Next Steps

The hiring process is tailored for research positions. It typically involves an initial screening and a series of practical tasks focused on assessing your skills and ways of thinking, followed by interviews.

If you are interested in this research area, please send us the following materials (in English):

  1. Your CV or a link to your LinkedIn profile.

  2. A brief cover letter explaining your interest in this specific area of AI Safety and why you are a good fit for the role (max 2 pages).

  3. Links to relevant projects, publications, or code samples (e.g., Google Scholar, GitHub).