Two analyses

Made a calendar of predicted AI alignment events

Response to a comment on a project idea

Posted on LessWrong.

Ten project ideas up for votes

Posted five on LessWrong and five on the MIRIxDiscord.

Reply to a comment on ‘Techniques for optimizing worst-case performance’

Pointing out a possible cause of a misunderstanding. Posted on LessWrong.

Reply to a comment on ‘Learning with catastrophes’

Pointing out that the article already addresses the issue of bad outcomes that the agent can’t be held responsible for. Posted on LessWrong.

Comment on ‘Capability amplification’

Clarifying something that had confused me. Posted as a comment on LessWrong.

Question about ‘Factored Cognition’

Posted as a comment on LessWrong.

Job description for an independent AI alignment researcher

I’ve posted it on LessWrong and will add it to this page once I’ve received and incorporated feedback.

2019-09-21: Nobody commented on the LessWrong post. I have since added the job description to this page.

Small suggestion for ‘Iterated Distillation and Amplification’