
conference-paper-writing

Use when writing or revising ML or AI conference papers for venues such as NeurIPS, ICML, ICLR, ACL, AAAI, or COLM, especially when the workflow is conference-first rather than Nature-style journal-first.

Repo: Chanw-research/claude-code-paper-writing
Slug: conference-paper-writing

SKILL.md

Conference Paper Writing for Top AI Venues

Expert-level guidance for writing publication-ready papers targeting NeurIPS, ICML, ICLR, ACL, AAAI, and COLM. This skill combines writing philosophy from top researchers (Nanda, Farquhar, Karpathy, Lipton, Steinhardt) with practical tools: LaTeX templates, citation verification APIs, and conference checklists.

Core Philosophy: Collaborative Writing

Paper writing is collaborative, but Claude should be proactive in delivering drafts.

The typical workflow starts with a research repository containing code, results, and experimental artifacts. Claude's role is to:

  1. Understand the project by exploring the repo, results, and existing documentation
  2. Deliver a complete first draft when confident about the contribution
  3. Search literature using web search and APIs to find relevant citations
  4. Refine through feedback cycles when the scientist provides input
  5. Ask for clarification only when genuinely uncertain about key decisions

Key Principle: Be proactive. If the repo and results are clear, deliver a full draft. Don't block waiting for feedback on every section—scientists are busy. Produce something concrete they can react to, then iterate based on their response.


⚠️ CRITICAL: Never Hallucinate Citations

This is the most important rule in academic writing with AI assistance.

The Problem

AI-generated citations have a ~40% error rate. Hallucinated references—papers that don't exist, wrong authors, incorrect years, fabricated DOIs—are a serious form of academic misconduct that can result in desk rejection or retraction.

The Rule

NEVER generate BibTeX entries from memory. ALWAYS fetch programmatically.

| Action | ✅ Correct | ❌ Wrong |
| --- | --- | --- |
| Adding a citation | Search API → verify → fetch BibTeX | Write BibTeX from memory |
| Uncertain about a paper | Mark as [CITATION NEEDED] | Guess the reference |
| Can't find exact paper | Note: "placeholder - verify" | Invent similar-sounding paper |

When You Can't Verify a Citation

If you cannot programmatically verify a citation, you MUST:

% EXPLICIT PLACEHOLDER - requires human verification
\cite{PLACEHOLDER_author2024_verify_this}  % TODO: Verify this citation exists

Always tell the scientist: "I've marked [X] citations as placeholders that need verification. I could not confirm these papers exist."
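Before submission, the LaTeX source can be scanned for leftover placeholders so none slip through. A minimal sketch, assuming the `PLACEHOLDER_` naming convention shown above:

```python
# Sketch: list unverified citation placeholders in a LaTeX source tree,
# assuming the PLACEHOLDER_ key prefix convention used above.
import re
from pathlib import Path

PLACEHOLDER_RE = re.compile(r"\\cite\{(PLACEHOLDER_[^}]*)\}")

def unverified_citations(root: str = ".") -> list[str]:
    """Return every PLACEHOLDER_* citation key found in .tex files under root."""
    keys: list[str] = []
    for tex in Path(root).rglob("*.tex"):
        keys.extend(PLACEHOLDER_RE.findall(tex.read_text(errors="ignore")))
    return sorted(set(keys))
```

Running this before submission gives the exact list to report to the scientist.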

Recommended: Install Exa MCP for Paper Search

For the best paper search experience, install Exa MCP which provides real-time academic search:

Claude Code:

claude mcp add exa -- npx -y mcp-remote "https://mcp.exa.ai/mcp"

Cursor / VS Code (add to MCP settings):

{
  "mcpServers": {
    "exa": {
      "type": "http",
      "url": "https://mcp.exa.ai/mcp"
    }
  }
}

Exa MCP enables searches like:

  • "Find papers on RLHF for language models published after 2023"
  • "Search for transformer architecture papers by Vaswani"
  • "Get recent work on sparse autoencoders for interpretability"

Then verify results with Semantic Scholar API and fetch BibTeX via DOI.
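This verify-then-fetch step can be sketched with only the standard library. The Semantic Scholar Graph API search endpoint and doi.org BibTeX content negotiation are real services, but the specific fields requested below are illustrative:

```python
# Sketch: verify a candidate paper via the Semantic Scholar Graph API,
# then fetch its BibTeX through doi.org content negotiation.
import json
import urllib.parse
import urllib.request

S2_SEARCH = "https://api.semanticscholar.org/graph/v1/paper/search"

def s2_search_url(query: str, limit: int = 5) -> str:
    """Build a paper-search URL; the fields requested are illustrative."""
    params = urllib.parse.urlencode({
        "query": query,
        "limit": limit,
        "fields": "title,year,authors,externalIds",
    })
    return f"{S2_SEARCH}?{params}"

def verify_paper(query: str) -> list[dict]:
    """Return candidate matches for human confirmation (makes a network call)."""
    with urllib.request.urlopen(s2_search_url(query)) as resp:
        return json.load(resp).get("data", [])

def fetch_bibtex(doi: str) -> str:
    """Fetch BibTeX for an already-verified DOI (makes a network call)."""
    req = urllib.request.Request(
        f"https://doi.org/{doi}",
        headers={"Accept": "application/x-bibtex"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

Only DOIs that came back from verification should be passed to `fetch_bibtex`; nothing here is written from memory.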


Workflow 0: Starting from a Research Repository

When beginning paper writing, start by understanding the project:

Project Understanding:
- [ ] Step 1: Explore the repository structure
- [ ] Step 2: Read README, existing docs, and key results
- [ ] Step 3: Identify the main contribution with the scientist
- [ ] Step 4: Find papers already cited in the codebase
- [ ] Step 5: Search for additional relevant literature
- [ ] Step 6: Outline the paper structure together
- [ ] Step 7: Draft sections iteratively with feedback

Step 1: Explore the Repository

# Understand project structure
ls -la
find . -name "*.py" | head -20
find . -name "*.md" -o -name "*.txt" | xargs grep -l -i "result\|conclusion\|finding"

Look for:

  • README.md - Project overview and claims
  • results/, outputs/, experiments/ - Key findings
  • configs/ - Experimental settings
  • Existing .bib files or citation references
  • Any draft documents or notes

Step 2: Identify Existing Citations

Check for papers already referenced in the codebase:

# Find existing citations
grep -r "arxiv\|doi\|cite" --include="*.md" --include="*.bib" --include="*.py"
find . -name "*.bib"
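The greps above can be complemented by a small script that pulls out arXiv IDs and DOIs directly. The regexes below cover the common formats and are a sketch, not an exhaustive parser:

```python
# Sketch: extract arXiv IDs and DOIs from a repo's text files so they can
# be verified and fetched as real BibTeX entries. Patterns are illustrative.
import re
from pathlib import Path

ARXIV_RE = re.compile(r"\b(\d{4}\.\d{4,5})(?:v\d+)?\b")
DOI_RE = re.compile(r"\b(10\.\d{4,9}/[^\s\"'<>]+)")

def find_citation_ids(root: str = ".") -> dict[str, set[str]]:
    """Return {'arxiv': {...}, 'doi': {...}} found in text files under root."""
    found: dict[str, set[str]] = {"arxiv": set(), "doi": set()}
    for path in Path(root).rglob("*"):
        if path.suffix not in {".md", ".py", ".bib", ".txt"}:
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        found["arxiv"].update(m.group(1) for m in ARXIV_RE.finditer(text))
        found["doi"].update(m.group(1) for m in DOI_RE.finditer(text))
    return found
```

Each extracted ID is then a candidate for the verification workflow, not a citation in itself.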

These are high-signal starting points for Related Work—the scientist has already deemed them relevant.

Step 3: Clarify the Contribution

Before writing, explicitly confirm with the scientist:

"Based on my understanding of the repo, the main contribution appears to be [X]. The key results show [Y]. Is this the framing you want for the paper, or should we emphasize different aspects?"

Never assume the narrative—always verify with the human.

Step 4: Search for Additional Literature

Use web search to find relevant papers:

Search queries to try:
- "[main technique] + [application domain]"
- "[baseline method] comparison"
- "[problem name] state-of-the-art"
- Author names from existing citations

Then verify and retrieve BibTeX using the citation workflow below.

Step 5: Deliver a First Draft

Be proactive—deliver a complete draft rather than asking permission for each section.

If the repo provides clear results and the contribution is apparent:

  1. Write the full first draft end-to-end
  2. Present the complete draft for feedback
  3. Iterate based on scientist's response

If genuinely uncertain about framing or major claims:

  1. Draft the sections you can write confidently
  2. Flag specific uncertainties: "I framed X as the main contribution—let me know if you'd prefer to emphasize Y instead"
  3. Continue with the draft rather than blocking

Questions to include with the draft (not before):

  • "I emphasized X as the main contribution—adjust if needed"
  • "I highlighted results A, B, C—let me know if others are more important"
  • "Related work section includes [papers]—add any I missed"

When to Use This Skill

Use this skill when:

  • Starting from a research repo to write a paper
  • Drafting or revising specific sections
  • Finding and verifying citations for related work
  • Formatting for conference submission
  • Resubmitting to a different venue (format conversion)
  • Iterating on drafts with scientist feedback

Always remember: First drafts are starting points for discussion, not final outputs.


Balancing Proactivity and Collaboration

Default: Be proactive. Deliver drafts, then iterate.

| Confidence Level | Action |
| --- | --- |
| High (clear repo, obvious contribution) | Write full draft, deliver, iterate on feedback |
| Medium (some ambiguity) | Write draft with flagged uncertainties, continue |
| Low (major unknowns) | Ask 1-2 targeted questions, then draft |

Draft first, ask with the draft (not before):

| Section | Draft Autonomously | Flag With Draft |
| --- | --- | --- |
| Abstract | Yes | "Framed contribution as X—adjust if needed" |
| Introduction | Yes | "Emphasized problem Y—correct if wrong" |
| Methods | Yes | "Included details A, B, C—add missing pieces" |
| Experiments | Yes | "Highlighted results 1, 2, 3—reorder if needed" |
| Related Work | Yes | "Cited papers X, Y, Z—add any I missed" |

Only block for input when:

  • Target venue is unclear (affects page limits, framing)
  • Multiple contradictory framings seem equally valid
  • Results seem incomplete or inconsistent
  • Explicit request to review before continuing

Don't block for:

  • Word choice decisions
  • Section ordering
  • Which specific results to show (make a choice, flag it)
  • Citation completeness (draft with what you find, note gaps)

The Narrative Principle

The single most critical insight: Your paper is not a collection of experiments—it's a story with one clear contribution supported by evidence.

Every successful ML paper centers on what Neel Nanda calls
