AI

Philosophy

  1. Three key features of AI: (i) alien, (ii) encyclopedic, and (iii) generative.
  2. The joint hypothesis problem applies: output quality is a joint function of model ability and user input, so poor results do not by themselves identify a weak model.
  3. Use AI most aggressively when the output is verifiable. Examples: formatting, extraction, citation checking, code with tests, and document summarization.
  4. The human is the router.
  5. Taste, breadth, and judgment become more valuable.

Tools

  1. Chatbots: (i) ChatGPT, (ii) Claude, (iii) Gemini, (iv) Grok
  2. Search and deep research: (i) Perplexity, (ii) ChatGPT Deep Research, (iii) Gemini Deep Research
  3. Plugins: (i) Claude for Excel, Word, and PowerPoint, (ii) ChatGPT for Excel, (iii) Microsoft Copilot, (iv) Google Workspace AI
  4. Agents: (i) Claude Cowork and Claude Code; (ii) Codex

Best Practices

  1. Role, problem, action, and constraints. The model needs to know who it is acting as, what problem you are solving, what output you want, and what rules it must follow.
  2. Durable artifacts. Use Markdown as the default working format, attach source documents when possible, and ask for citations, page numbers, and short quotes when the output depends on evidence.
  3. Separate generation from verification. Use AI to draft, extract, summarize, or code, but check important outputs with tests, source documents, a second model, or a separate critic pass.
  4. Use agents for divided labor. Drafting, critique, editing, search, and coding can often be split across agents, but coordination should run through clear Markdown handoff files.
  5. Manage input and attention. Use voice input for long prompts, keep context small and structured, and close loops before opening new ones.
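
The "separate generation from verification" practice can be sketched in a few lines. This is an illustrative harness, not any tool's real API: after a model drafts a summary containing short quotes, check each quote against the source text before trusting the output. All function names and the normalization scheme are assumptions of this sketch.

```python
# Minimal sketch of separating generation from verification:
# check that every quoted snippet in a model-drafted summary actually
# appears in the source document. Names here are illustrative.

import re

def normalize(text: str) -> str:
    """Collapse whitespace and lowercase, so quote matching is not
    defeated by line breaks or capitalization in the source."""
    return re.sub(r"\s+", " ", text).strip().lower()

def verify_quotes(draft: str, source: str) -> list[str]:
    """Return the quotes in `draft` (text inside double quotes) that do
    NOT appear verbatim in `source`. Empty list means all quotes check out."""
    src = normalize(source)
    quotes = re.findall(r'"([^"]+)"', draft)
    return [q for q in quotes if normalize(q) not in src]

source = "Productivity rises 14 percent on average, and 34 percent for novices."
good_draft = 'The study finds that productivity "rises 14 percent on average".'
bad_draft = 'The study finds that productivity "rises 24 percent on average".'

print(verify_quotes(good_draft, source))  # → []
print(verify_quotes(bad_draft, source))   # → ['rises 24 percent on average']
```

The same pattern generalizes: tests for generated code, a second model as critic, or a diff against the attached source document.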

Implications

  1. Garicano, Li, and Wu, "Weak Bundle, Strong Bundle", 2026. The same AI capability can be substitution or augmentation depending on how easily a job can be split into tasks.

    Labor markets price jobs, not isolated tasks. Weak bundles let AI peel away codifiable work, narrowing the human role and concentrating a capacity shock in the remaining tasks; strong bundles keep AI inside the job and preserve more of the human revenue share. The empirical object is not just exposure, but coordination costs, accountability, shared context, and demand elasticity.

  2. Garicano and Rayo, "Training in the Age of AI", 2025. AI can break apprenticeship by automating the junior work that used to pay for training.

    The key statistic is the expertise leverage ratio: AI-augmented expert value relative to AI-alone entry output. If the ratio is high enough, training remains viable; if it is too low, training compresses and can collapse once onboarding costs are included. The central question is whether AI raises expert value enough to replace the missing junior-production surplus.
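
The viability condition can be illustrated with stylized numbers. The variable names, the illustrative values, and the threshold below are mine, not the paper's estimates:

```python
# Stylized illustration of the expertise leverage ratio described above:
# training stays viable only if an AI-augmented expert is worth enough
# relative to what AI alone produces at entry level. All numbers and the
# threshold are illustrative assumptions, not the paper's estimates.

def expertise_leverage_ratio(expert_value_with_ai: float,
                             ai_alone_entry_output: float) -> float:
    """Value of an AI-augmented expert per unit of AI-alone entry output."""
    return expert_value_with_ai / ai_alone_entry_output

def training_viable(ratio: float, viability_threshold: float) -> bool:
    """Training survives when the leverage ratio clears the threshold
    implied by onboarding costs and forgone junior production."""
    return ratio >= viability_threshold

high = expertise_leverage_ratio(expert_value_with_ai=300.0,
                                ai_alone_entry_output=100.0)
low = expertise_leverage_ratio(expert_value_with_ai=120.0,
                               ai_alone_entry_output=100.0)

print(training_viable(high, viability_threshold=2.0))  # → True: firms still train
print(training_viable(low, viability_threshold=2.0))   # → False: training compresses
```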

  3. Brynjolfsson, Li, and Raymond, "Generative AI at Work", 2023/2025. AI raises customer-support productivity mostly by lifting novices toward expert practices.

    In a rollout to 5,179 agents, productivity rises 14% on average and 34% for novice and lower-skilled workers. The mechanism appears to be diffusion of tacit best practices from stronger agents, with evidence from recommendation adherence, outage periods, and convergence in communication patterns. The paper is a strong micro result, but it cannot observe wage, hiring, or aggregate labor-demand responses.

  4. Cruces, Fernandez Meijide, Galiani, Galvez, and Lombardi, "Does Generative AI Narrow Education-Based Productivity Gaps?", 2026. AI narrows education-based productivity gaps at the task-execution margin.

    In a randomized experiment, the education gap falls from 0.548 to 0.139 standard deviations with AI access. A follow-up task without AI shows no evidence of over-reliance and some carryover for lower-education participants, but a residual education gap remains. The interpretation is task-level skill leveling, not the disappearance of human capital.

  5. Dell'Acqua et al., "Navigating the Jagged Technological Frontier", 2023. AI helps inside its frontier and can hurt outside it.

    BCG consultants using GPT-4 do substantially better on tasks inside the frontier, completing more work faster and at higher quality. On a task deliberately placed outside the frontier, however, AI users are 19 percentage points less likely to reach the correct answer. The frontier is jagged, so the practical skill is knowing when to delegate, when to integrate, and when to interrogate.

  6. Noy and Zhang, "Experimental Evidence on the Productivity Effects of Generative Artificial Intelligence", 2023. ChatGPT makes short professional writing faster and better, but mostly by substituting for effort.

    Completion time falls by about 37% and quality rises by about 0.45 standard deviations. Lower-baseline writers benefit more, compressing the productivity distribution. Many treated participants submit lightly edited AI output, so the experiment shows immediate speed and quality gains more than deep human-machine complementarity.

  7. Bick, Blandin, and Deming, "The Rapid Adoption of Generative AI", 2024/2025. AI adoption is broad, but work-hour intensity is still modest.

    By late 2024, nearly 40% of U.S. adults aged 18-64 had used generative AI, but only 1-5% of all work hours were AI-assisted. Self-reported time savings imply a potential aggregate productivity gain around 1.1% at current usage. The macro effect depends less on whether people have tried AI and more on repeated use, workflow redesign, and firm-level integration.
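
The back-of-envelope behind a figure like this is simply the share of work hours that are AI-assisted times the fraction of time saved on those hours. The input values below are illustrative assumptions chosen to land near 1.1%, not the paper's exact inputs:

```python
# Back-of-envelope aggregate productivity arithmetic:
# potential gain ≈ (share of work hours that are AI-assisted)
#               × (fraction of time saved on those assisted hours).
# Both inputs are illustrative assumptions, not the paper's exact values.

def aggregate_gain(share_hours_assisted: float,
                   time_saved_on_assisted: float) -> float:
    return share_hours_assisted * time_saved_on_assisted

gain = aggregate_gain(share_hours_assisted=0.044, time_saved_on_assisted=0.25)
print(f"{gain:.3%}")  # → 1.100%
```

The arithmetic makes the paper's point concrete: the binding margin is assisted hours, not the headline adoption rate.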

  8. Humlum and Vestergaard, "Still Waters, Rapid Currents", 2025/2026. AI reorganizes tasks before it shows up in wages or hours.

    Danish linked survey and administrative data show adoption, time savings, and new AI-related tasks in content generation, oversight, and integration. But earnings and recorded hours show precise null effects two years after ChatGPT. The early effect is beneath the surface of standard labor statistics: work changes before wages and hours do.

  9. Dillon, Jaffe, Immorlica, and Stanton, "Shifting Work Patterns with Generative AI", 2025. Individual AI access saves time but does not automatically redesign work.

    In a field experiment across 66 firms, Copilot users spend about two fewer hours per week on email and less time outside regular hours. Meetings, document creation, email volume, and task composition do not move much. The distinction is important: tool access can create private time savings without changing the organization of production.

  10. Eisfeldt, Schubert, and Zhang, "Generative AI and Firm Values", 2023/2024. Markets initially priced AI exposure as a labor-cost reduction for firms.

    High-exposure firms rose about 5% relative to low-exposure firms after ChatGPT's release, especially when they had valuable data assets. The labor evidence points toward substitution: exposed occupations see lower job postings and lower relative wages, particularly when exposure is concentrated in core tasks rather than supplemental ones. The market reaction looks less like generic optimism and more like expected cost savings.

  11. Acemoglu, Autor, and Johnson, "Building Pro-Worker Artificial Intelligence", 2026. AI becomes pro-worker when it creates demand for new human expertise.

    The paper distinguishes labor augmentation, capital augmentation, automation, expertise leveling, and new task creation. Only new task creation is unambiguously pro-worker, because it expands demand for human capabilities rather than commodifying them. The direction of AI is therefore not technologically fixed: developer incentives, buyer incentives, tax policy, antitrust, procurement, and rights over worker expertise all matter.

  12. Acemoglu, "The Simple Macroeconomics of AI", 2024. Task exposure translates into much smaller medium-run macro gains than the largest AI forecasts imply.

    The paper estimates an upper-bound TFP gain of about 0.66% over ten years, falling to about 0.53% after accounting for hard-to-learn tasks. GDP can rise somewhat more through investment, but the welfare-relevant productivity number is modest relative to aggressive forecasts. The distributional effect may still be large if AI widens the capital-labor income gap.
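
The headline number follows Hulten-style arithmetic: TFP gain ≈ (GDP share of tasks AI can profitably perform) × (average cost savings on those tasks). The two inputs below approximate the magnitudes in the paper's baseline; treat them as rounded illustrations, not exact values:

```python
# Hulten-style back-of-envelope behind the ~0.66% upper bound:
# TFP gain ≈ exposed task share of GDP × average cost savings on those
# tasks. Inputs are rounded illustrations of the paper's baseline.

def tfp_gain(exposed_task_share: float, avg_cost_saving: float) -> float:
    return exposed_task_share * avg_cost_saving

ten_year_gain = tfp_gain(exposed_task_share=0.046, avg_cost_saving=0.144)
print(f"{ten_year_gain:.2%} over ten years")  # ≈ 0.66%
```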

  13. Jones, "A.I. and Our Economic Future", 2026. The macro question is whether AI is another general-purpose technology or a technology that automates intelligence itself.

    Weak-links logic disciplines the extreme case: automating one task raises output in proportion to that task's initial economic share. Automating software alone is not the same as automating all cognition, and even broad cognitive automation may generate a large but gradual transition. The long-run issues become distribution, meaningful work, catastrophic risk, and what humans still own when intelligence is cheap.
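
The share-proportionality logic has a one-line version: under a Cobb-Douglas aggregate, a lambda-fold productivity gain in a task with GDP share s multiplies output by lambda raised to the power s. The 2% software share and the tenfold gain below are illustrative assumptions, not figures from the paper:

```python
# Minimal version of the share-proportionality logic above: under a
# Cobb-Douglas aggregate, a lambda-fold productivity gain in a task with
# GDP share s multiplies output by lambda**s. The 2% share and 10x gain
# are illustrative assumptions.

def output_multiplier(task_share: float, productivity_gain: float) -> float:
    return productivity_gain ** task_share

m = output_multiplier(task_share=0.02, productivity_gain=10.0)
print(f"output rises about {m - 1:.1%}")  # ≈ 4.7%
```

Even a tenfold productivity gain in a small-share task moves aggregate output only a few percent, which is the weak-links discipline on extreme forecasts.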

  14. Athey and Scott Morton, "Artificial Intelligence, Competition, and Welfare", 2025. AI market power can turn productivity gains into worker losses and upstream rents.

    AI is often a priced upstream input, not a free productivity shock. If displaced workers absorb the substitution while concentrated AI suppliers capture fees, workers can suffer double harm from both displacement and AI pricing power. Broad gains require competitive AI pricing and productive absorption of displaced labor in other sectors.

Resources

  1. Workflow