My Approach to Governance
When I think about governance and funding, I always come back to the people. Weighing measurable outcomes against real human impact, I would say the success of a project is inseparable from the person leading it. Governance isn't just about allocating resources; it's an act of supporting the builder as much as the build.
When evaluating a proposal, I prefer to walk through the context first. Who is building this? Why are they building it? Does the problem actually exist in the lived experience of real citizens? I try to be a partner or advisor rather than a distant bureaucrat, and I always want to make sure we are aligned on the lived reality of the problem before we talk about solutions.
Core Beliefs
- Humanity First: I always evaluate the project lead's reliability and intent first. The background and context of the builder are my primary filters. I focus my interactions on providing personal support to them.
- Holistic Validation: A proposal is only as good as its connection to the lived reality of real people. I need to verify that the "issue" being solved exists empirically, rather than just as a theoretical construct.
- Pragmatic Flexibility: Rigid adherence to short-term measurable outcomes is secondary to the pursuit of meaningful, long-term systemic change. That said, long-term systemic change has to be balanced against immediate community needs, rather than favoring one completely over the other.
Values & Principles
- Equitable Impact: Funding should be guided by geographic equity and the needs of underserved populations. I explicitly reject pure utilitarianism—just looking for the "greatest number" of people impacted—as the sole decision driver.
- Environmental Stewardship: Sustainability is a foundational requirement, not an optional feature.
- Accountability & AI Oversight: Humans must remain the final authority on AI-driven decisions, and as funders we share the burden of failure with the projects we back. To reinforce that accountability, I also favor developing new insurance instruments for AI risk.
Governance Positions
- Community-Led: I strongly favor projects with robust community governance structures. They just tend to have better long-term sustainability.
- Skepticism of Pure Efficiency: When metrics are fuzzy, I evaluate based on the potential depth of impact and the reliability of the human lead. I explicitly reject cost-effectiveness and short-term measurable outcomes as the primary proxies for success.
- Preference for the Proven: I prioritize proven, grounded solutions over high-risk experimentalism to ensure reliable impact.
Where I Draw the Line
There are a few areas where I won't compromise. I will not support:
- Using cost-effectiveness as the primary decision metric.
- Prioritizing theoretical constructs over the lived reality of actual citizens.
- Ignoring the human lead's background and intent during the evaluation process.
Trade-offs and Gray Areas
Governance is rarely a hard "yes" or "no," and I like to explore the middle ground.
- Funding Alternatives: I am willing to fund projects that already have existing support if good alternatives are lacking.
- Revenue Models: I like supporting projects with sustainable revenue models, but I wouldn't make it a mandatory requirement to receive funding.
- Balancing Equity: While I prioritize geographic equity, I am willing to balance it against other project merits. Finding the optimal balance between geographic equity and total overall impact is an area where I'm still navigating the uncertainties.
- Defining "Proven": I prefer proven solutions, but defining the exact threshold for what is "proven" versus "experimental" in novel sectors can be tricky.
- AI Insurance: While I support new insurance instruments for AI risk, the specifics of how those instruments should be structured are still an area of uncertainty for me.