Hmm... Maybe I'm missing something here but if there really are existential risks, I can't think of anything more important to dedicate one's 'career' to, whether or not that forces one to reinvent themselves if the Risky Thing were to be solved.
Nothing sad at all about that, in fact that's a great outcome!
It honestly sounds like a cowardly argument to me. It's the fear in the back of someone's mind saying, "yeah that's a hard problem, and somebody needs to give it attention, but what happens if I'm not the one who solves it, or if it solves itself? Just leave that risk to someone else, better for you to find easier pickings."
Preventing terrible outcomes from occurring in the first place isn't really a sexy thing to do, and never has been. It requires conviction and self-sacrifice that few have the spine for, and yes, as you point out here, the effort may be wasted in the end for some unknown unknown reason.
I don't see these lives and careers as tragic wastes, though; they seem heroic.
Eliezer in particular could very happily hang up his x-risk hat and be a science fiction writer. His life would straightforwardly be better and he'd be financially fine.
Your advice is fine for, like, a twentysomething kid who went into "x-risk" when it was already a career field and doesn't know who he'd be without that identity.
interesting essay, though it seems fairly short, and doesn’t seem to address what seem to me to be fairly obvious counterarguments. i’m confused about that!
(independent of the verity of any of the arguments you made), the arguments you cited seem like reason to merely be more suspicious of folks who have significant career capital in fighting x-risk than one otherwise might; in other words, to raise the default level of scrutiny in what they say on relevant topics. that line of argument seems right to me; i agree with it.
it seems like you’ve extended the arguments you made way, way beyond what seems appropriate to me, that you’ve extended them all the way to “basically don’t trust [folks who have significant career capital invested in fighting x-risk] with their takes on x-risk,” and extended that to “basically don’t invest career capital into fighting x-risk, lest you become one of those in the aforementioned category.” this seems… bizarrely/pretty clearly wrong.
if someone told me that they’re working to solve some problem where success would put them out of a job, i would (a) ask them if they’ve thought about this conflict of interest, and (b) ask what they’ve done about it. for me, solid answers to those questions — as with any conflict of interest! — would solidly increase my trust in them. positive example: if an x-risk researcher said “i used to be in quant finance, and made a lot more money; if i succeed, i’ll simply go back to getting rich.” negative example: if an x-risk researcher said “oh, uh, idk. haven’t really thought about it that much.” another negative example (for me, at least): if they said “yeah, i’ve thought about it a lot, it just doesn’t seem that worth investing in for me.”
(again, this doesn’t even touch on the extent to which your arguments are true; just the extent to which they entail the conclusion you draw. the first thing that comes to mind is that e.g. eliezer would, i imagine, be quite happy to leave alignment research in favor of publishing harry potter fanfic, or writing about rationality stuff on lesswrong. happy to talk more about ways in which i think your arguments might be flawed, if you’re interested, when i get to my laptop & can write stuff while seeing the essay at the same time!)
Seems to me that working on an existential risk has benefits even if the risk is not truly existential but still damaging. A meteor is very different from AI: if the meteor doesn't hit, then nothing happens. If AI doesn't kill us, it's still dangerous and should be stopped. Therefore, working against it still has meaning even if the risk is not what it seems.
Also, in my opinion, the dehumanizing property of AI and its prime use as a tool to create a dystopia is already an existential risk, because existing within a prison of technology is not real existence at all, but a transformation into an automaton that serves nothingness.
I like some of the premises and conclusions.
And it's funny; I was just thinking about this question yesterday.
Today, I would probably try to argue for a change in how we define a career and what we expect from it. What does it mean to build a career around something and then fail to pivot when the job is no longer relevant? Can't one bring one's talents, learnings, reputation, and network to another, next goal?
Or is forging a career more or less automatically a one-way street, where one needs to get lucky that this one path doesn't become obsolete? Is the alternative not a career but just muddling through?
Other than that, I'd say yes, it makes sense to regularly check whether the expected-value rationale for your project/job/career still holds.
(And in terms of lost opportunities, I'd tend to say bright minds who could have invented things but instead pursue a career in money-making/finance rank high up there. Depending on which existential risks the people you have in mind are working on, I would expect higher-value collateral results from them than from the ex-tech, now-finance people.)
I'm not sure I follow the argument - traffic signals clearly have a role to play in ensuring safe & efficient travel, don't they? It's definitely harder to build a career going against the grain, but to claim there is no value in such a career is overblown.
It’s likely (~50% at elite American colleges) these folks would have gone into management consulting, big law, or finance instead had they not gotten nerd-sniped in high school or college. I find the x-risk jobs more socially interesting than those alternatives.