Criteria most AI/ML researchers satisfy
AI/ML researchers most commonly qualify under the following criteria:
- Scholarly authorship: peer-reviewed papers at NeurIPS, ICML, ICLR, ACL, CVPR, EMNLP, AAAI, KDD, etc.
- Judging: program-committee service, area-chair appointments, manuscript reviews
- Original contributions: citation count, downstream model adoption, benchmark records, deployed production systems
- Critical role: senior or lead researcher at a recognized lab, university, or company
- Awards: best-paper awards, fellowships, named scholarships
- Membership: invited memberships in professional societies of distinction
Citation thresholds that work
There is no fixed citation threshold in the regulations. In practice, AI/ML researchers with several hundred citations, at least one strongly cited first-author paper, and evidence under the other criteria above regularly succeed.
Citation evidence should be supplemented with a downstream-impact narrative: which papers cited the work, what those papers did with it, and how the work shaped the field.
Letter strategy for AI/ML
Letters from independent researchers at peer institutions are the dominant evidence type. Six to nine letters, each tailored to one or two specific criteria and containing concrete technical detail about the petitioner's contributions, dramatically outperform generic letters.