Join WorldMind

WorldMind is not a conventional AI startup, and it is not a conventional research lab.

We are building a project that begins from the conviction that artificial intelligence must be understood before it can be built responsibly. That conviction shapes not only our research direction, but how we work, what we value, and who belongs here.

This page is for those in whom that conviction resonates.

What it means to join

Joining WorldMind does not mean stepping into a predefined role with fixed deliverables and near-term milestones. It means contributing to a long-horizon effort to rethink what artificial intelligence is and what it could become.

The work involves:

- engaging seriously with foundational questions about intelligence, world, and responsibility
- translating conceptual clarity into system design and architectural experiments
- resisting the pressure to optimize prematurely for scale, speed, or spectacle
- building systems that are governed internally rather than controlled externally

This is slower work than most AI development today. It is also more demanding.

Who we are looking for

WorldMind welcomes interest from people across disciplines, but not every interest is aligned with ours.

We are especially interested in people who:

- are dissatisfied with prevailing AI paradigms but unwilling to dismiss engineering altogether
- can hold philosophical rigor and technical experimentation in the same frame
- are comfortable working without hype, buzzwords, or guaranteed timelines
- care about responsibility, governance, and meaning as design constraints rather than afterthoughts

Backgrounds may include:

- machine learning and AI engineering
- systems architecture and software engineering
- philosophy, phenomenology, or cognitive science
- governance, ethics, and institutional design
- interdisciplinary research bridging theory and implementation

Credentials matter far less than how you think.

Who this is not for

WorldMind is probably not the right place if you are primarily interested in:

- rapid product launches or short-term exits
- incremental improvements to existing large language models
- optimizing benchmarks without questioning what they measure
- building systems whose primary value is engagement or automation at scale

How we work

WorldMind values clarity over cleverness, responsibility over acceleration, and durability over dominance.

We expect disagreement, debate, and revision. We do not expect conformity. At the same time, we take our commitments seriously, and we expect collaborators to do the same.

This is a project where thinking slowly is not a liability.

Forms of involvement

At this stage, involvement may take different forms: research collaboration, architectural prototyping, writing and conceptual development, exploratory engineering work, or longer-term roles as the project takes on more concrete form.

Specific roles will emerge as the work does. We are not filling headcount. We are building a trajectory.

Getting in touch

If WorldMind resonates with you, the best way to begin is not with a résumé, but with a brief account of why this project matters to you and how you imagine contributing to it.

We are especially interested in hearing from people who can articulate what they find unsatisfying about current AI approaches, what they think is missing, and why they are willing to work on a problem whose value cannot be reduced to near-term metrics.

Conversation precedes application here.

Closing

WorldMind is a commitment, not an opportunity in the conventional sense.

If you are looking for a place to help rethink artificial intelligence at its foundations and to participate in building something that does not yet exist because it cannot yet exist, we invite you to reach out.