Popularizer, if anything. But researcher too - unless you're willing to restrict that term to people only working within the structures of academia or corporate R&D labs.
And I'd definitely say he's a legitimate leader in the field; more than that, I'm inclined to agree with Time Magazine on this - in the byline, Eliezer is said to be "widely regarded as a founder of the field [of Artificial General Intelligence, or at least of aligning AGI]" (emphasis mine).
OpenAI folks? Anthropic? All the AI pundits and pundit wannabes so loud these days? They're all people directly or indirectly influenced by Eliezer's writings, talking up or dismissing or commenting on the specific concerns Eliezer described/popularized more than a decade ago. Hell, I've been on HN for more than a decade, and in that time, AGI existential risks, and X-risks in general, have always been strongly associated with Eliezer and the LessWrong crowd (read: them being mocked for indulging in delusions and fear-mongering).
So don't tell me Yudkowsky isn't a "leader in the field", when the very reason he's invited to the table with all the current AI movers and shakers is precisely that the latter recognize his contributions as foundational to the field. Sure, he mostly wrote a lot of blog articles dealing with topics most people considered purely theoretical at the time, but the field was started in earnest by people who read that blog, making it a foundational mythos of modern AI research.