$ cat /blog/neovim-in-the-ai-world.md

Neovim in the AI World: A Practical AI-Enhanced Editing Workflow

A practical guide to AI-assisted editing in Neovim using LSPs and plugins for faster, focused coding with real-world tradeoffs.

I’ve been tinkering with AI-assisted editing in Neovim for months now. The goal isn’t “AI everywhere.” It’s a pragmatic workflow that shortens boilerplate, clarifies intent, and leaves me more time to actually ship features. Here’s the workflow I use every day, what actually works in practice, and where I’ve learned to draw lines.

What you’ll get from this post

  • A concrete, usable Neovim setup that blends built-in LSP with AI-assisted editing.
  • Real-world tradeoffs: latency, privacy, cost, and reliability.
  • Practical examples you can copy-paste into your own config and workflows.
  • A transparent look at how I balance AI suggestions with solid software engineering practices.

Why AI editing in Neovim now makes sense

Neovim’s editing model is fast, deterministic, and scriptable. AI adds value by reducing boilerplate, generating scaffolding, and explaining code in context. The combination is powerful when you keep it under control: AI suggestions should be optional, private by default, and clearly auditable. With LSP handling semantic editing and tooling, AI becomes a teammate that can draft comments, tests, and small refactors without taking over.

The core stack I rely on

  • Neovim 0.9+ for robust Lua configuration and built-in LSP
  • A modern plugin manager (packer.nvim, lazy.nvim, or any you prefer)
  • LSP: nvim-lspconfig, mason.nvim, mason-lspconfig.nvim
  • Completion: nvim-cmp, cmp-nvim-lsp, cmp_luasnip
  • Snippets: LuaSnip
  • Quality and formatting: null-ls.nvim (since archived; nvimtools/none-ls.nvim is the maintained drop-in fork)
  • AI-assisted editing: copilot.lua (GitHub Copilot for Neovim)
  • Optional future-proofing: a local, privacy-conscious fallback for AI (e.g., code templates, open models) if you need offline options

Getting started: a minimal, practical setup

The exact install method depends on your setup, but here’s a compact, battle-tested baseline you can adapt.

  1. Prerequisites
  • Neovim 0.9 or newer
  • Git, Node.js (for some language servers and tooling)
  • A working plugin manager (I use packer.nvim)
  2. Plugin declarations (example with packer.nvim)
-- plugins.lua
return require('packer').startup(function(use)
  -- Core LSP and toolchain
  use 'neovim/nvim-lspconfig'
  use 'williamboman/mason.nvim'
  use 'williamboman/mason-lspconfig.nvim'
  -- Completion + snippets
  use 'hrsh7th/nvim-cmp'
  use 'hrsh7th/cmp-nvim-lsp'
  use 'saadparwaiz1/cmp_luasnip'
  use 'L3MON4D3/LuaSnip'
  -- Quality and formatting
  use 'jose-elias-alvarez/null-ls.nvim'
  -- AI-assisted editing
  use 'zbirenbaum/copilot.lua'  -- Lua port of Copilot; matches the require("copilot") setup below
end)
  3. Basic LSP and completion setup (example in init.lua or lua/plugins/config.lua)
-- Mason & LSP bootstrap
require("mason").setup()
require("mason-lspconfig").setup({ ensure_installed = { "tsserver", "pyright" } })

local lspconfig = require('lspconfig')
local cmp = require('cmp')

cmp.setup({
  snippet = {
    expand = function(args) require('luasnip').lsp_expand(args.body) end
  },
  mapping = {
    ['<C-b>'] = cmp.mapping.scroll_docs(-4),
    ['<C-f>'] = cmp.mapping.scroll_docs(4),
    ['<Tab>'] = cmp.mapping.select_next_item(),
    ['<S-Tab>'] = cmp.mapping.select_prev_item(),
    ['<CR>'] = cmp.mapping.confirm({ select = true }),
  },
  sources = {
    { name = 'nvim_lsp' },
    { name = 'luasnip' },
  },
})

local on_attach = function(client, bufnr)
  local opts = { noremap = true, silent = true, buffer = bufnr }
  vim.keymap.set('n', 'gd', vim.lsp.buf.definition, opts)
  vim.keymap.set('n', 'K', vim.lsp.buf.hover, opts)
end

-- Advertise nvim-cmp's completion capabilities to the servers
local capabilities = require('cmp_nvim_lsp').default_capabilities()

-- Example servers
lspconfig.tsserver.setup({ on_attach = on_attach, capabilities = capabilities })
lspconfig.pyright.setup({ on_attach = on_attach, capabilities = capabilities })
  4. Copilot (AI) setup
-- Copilot config (basic)
require("copilot").setup({
  panel = { enabled = true },
  suggestion = { enabled = true, auto_trigger = true },
})

Important notes on AI plugin behavior

  • Copilot provides inline suggestions and a panel. With copilot.lua’s defaults you accept a suggestion with <M-l> and dismiss with <C-]>; both are remappable (a keymap sketch follows this list).
  • Use Copilot as a drafting partner, not as a replacement for thinking through edge cases. Always validate with your tests and LSP feedback.
  • Privacy and scope matter. Avoid sending sensitive secrets to the cloud. If a project has sensitive data, consider AI-enabled workflows that operate on private copies or local models when feasible.
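
Here’s a sketch that extends the basic setup above with explicit keymaps and a filetype deny-list, assuming copilot.lua’s documented suggestion.keymap and filetypes options (the specific mappings are illustrative, not recommendations):
-- Copilot with explicit keymaps and a filetype deny-list (a sketch; adjust to taste)
require("copilot").setup({
  panel = { enabled = true },
  suggestion = {
    enabled = true,
    auto_trigger = true,
    keymap = {
      accept = "<Tab>",  -- careful: this can clash with nvim-cmp's <Tab> mapping
      dismiss = "<C-]>",
      next = "<M-]>",
      prev = "<M-[>",
    },
  },
  filetypes = {
    markdown = false,    -- keep prose local
    gitcommit = false,
    ["."] = false,       -- dotfiles often contain secrets
  },
})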

A practical AI-assisted editing workflow in action

Here’s how I actually operate on a typical feature, from idea to review, with AI as a collaborator.

  1. Define the problem and scaffold
  • Start with a quick explanation in a docstring or a comment block. The AI helps draft it, not decide it.
  • Example (Python) function outline:
def calculate_totals(items):
    """
    Calculate the total price for a list of items.
    Each item is a dict with 'price' and 'quantity'.
    Returns the grand total as float.
    """
    pass
  • I often ask Copilot to generate a realistic docstring or a function skeleton. If Copilot isn’t helpful, I write the skeleton and let LSP/typing fill in the type hints.
  2. Flesh out the implementation with AI-assisted suggestions
  • Start with the core logic. Let Copilot propose a working version; then I review and tailor it for performance and correctness.
  • Example: implementing the function with type hints in Python:
from typing import List, Dict

def calculate_totals(items: List[Dict[str, float]]) -> float:
    total = 0.0
    for item in items:
        total += item.get('price', 0.0) * item.get('quantity', 1.0)
    return total
  • If I want a more precise or safer variant, I’ll ask Copilot for a version that validates inputs and handles edge cases, and I’ll compare both with a quick unit test.
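  • A typical “safer” variant the AI drafts looks something like this (a sketch; the validation rules are illustrative, not project requirements):
from typing import List, Dict

def calculate_totals_safe(items: List[Dict[str, float]]) -> float:
    """Like calculate_totals, but rejects malformed input instead of guessing."""
    if not isinstance(items, list):
        raise TypeError("items must be a list of dicts")
    total = 0.0
    for i, item in enumerate(items):
        if not isinstance(item, dict):
            raise TypeError(f"item {i} is not a dict")
        price = item.get('price', 0.0)
        quantity = item.get('quantity', 1.0)
        if price < 0 or quantity < 0:
            raise ValueError(f"item {i} has a negative price or quantity")
        total += price * quantity
    return total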
  3. Draft tests with AI assistance
  • I rarely rely on AI to write all tests, but an initial scaffold saves time. Then I tailor tests to reflect real edge cases.
import unittest

# Assumes calculate_totals lives in totals.py; adjust the import to your layout.
from totals import calculate_totals

class TestCalculateTotals(unittest.TestCase):
    def test_basic_case(self):
        items = [
            {'price': 10.0, 'quantity': 2},
            {'price': 5.0, 'quantity': 3},
        ]
        self.assertAlmostEqual(calculate_totals(items), 35.0)

    def test_missing_keys(self):
        items = [{'price': 3.0}, {'quantity': 4}]
        self.assertAlmostEqual(calculate_totals(items), 3.0)

if __name__ == '__main__':
    unittest.main()
  • Copilot can sketch a test file quickly; I then prune it to match the project’s testing conventions (pytest, unittest, property-based tests, etc.).
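  • If the project uses pytest, the same scaffold collapses into a few parametrized cases (a sketch, reusing the hypothetical totals module from above):
import pytest

from totals import calculate_totals  # hypothetical module path, as above

@pytest.mark.parametrize("items, expected", [
    ([{'price': 10.0, 'quantity': 2}, {'price': 5.0, 'quantity': 3}], 35.0),
    ([{'price': 3.0}, {'quantity': 4}], 3.0),  # missing keys fall back to defaults
    ([], 0.0),                                 # empty input is worth pinning down
])
def test_calculate_totals(items, expected):
    assert calculate_totals(items) == pytest.approx(expected)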
  4. Linting, formatting, and semantic checks
  • I rely on null-ls.nvim to wire formatters and linters to the editor, so the AI-generated code is judged by the same quality standards as the rest of the codebase.
  • Example: set up Prettier for JS/TS, Black for Python, isort for Python imports.
-- null-ls setup (example)
local null_ls = require("null-ls")
null_ls.setup({
  sources = {
    null_ls.builtins.formatting.prettier,
    null_ls.builtins.formatting.black.with({ extra_args = { "--line-length", "88" } }),
    null_ls.builtins.formatting.isort,
    null_ls.builtins.diagnostics.eslint,
  },
})
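  • To run these tools automatically instead of by hand, hook null-ls into save events. A minimal format-on-save sketch (the augroup name is arbitrary; vim.lsp.buf.format is Neovim’s built-in formatting entry point):
-- Format on save through the null-ls client (a sketch)
local augroup = vim.api.nvim_create_augroup("FormatOnSave", { clear = true })
null_ls.setup({
  sources = { --[[ same sources as above ]] },
  on_attach = function(client, bufnr)
    if client.supports_method("textDocument/formatting") then
      vim.api.nvim_clear_autocmds({ group = augroup, buffer = bufnr })
      vim.api.nvim_create_autocmd("BufWritePre", {
        group = augroup,
        buffer = bufnr,
        callback = function() vim.lsp.buf.format({ bufnr = bufnr }) end,
      })
    end
  end,
})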
  5. Refactor with AI help, but with guardrails
  • If I spot a refactor opportunity (rename, extract function, etc.), I’ll ask the AI to propose two or three options and then pick the one aligned with performance and readability.
  • The final decision always rests with human judgment, verified by tests and code reviews.
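  • For the mechanical half of a refactor, Neovim’s built-in LSP covers rename and code actions directly. Two keymaps worth adding to the earlier on_attach (a sketch using core vim.lsp.buf functions):
-- Inside on_attach, alongside gd and K
vim.keymap.set('n', '<leader>rn', vim.lsp.buf.rename, opts)
vim.keymap.set('n', '<leader>ca', vim.lsp.buf.code_action, opts)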

Two concrete workflow patterns you can adopt today

Pattern A: AI-assisted drafting for new modules

  • Create a minimal scaffold with your preferred language idioms.
  • Use Copilot to write a first pass of the module’s API surface and a couple of essential functions.
  • Review, refine, and ensure type-safety and edge-case handling with LSP diagnostics and unit tests.

Pattern B: AI-driven explanation and learning flow

  • When you’re stuck, ask Copilot to generate a concise explanation of a failing snippet or a complex algorithm in plain English (and then translate that into code).
  • Use the explanation as a checklist for implementing the fix yourself. This avoids blindly trusting AI for correctness.

Real-world tradeoffs I’ve learned

  • Latency and responsiveness: AI suggestions add latency. You’ll want a configuration that keeps editing non-blocking while the AI is busy. The panel approach helps: AI content lives in a side pane while you keep typing in the main buffer. (A tuning sketch follows this list.)
  • Privacy and data handling: AI code suggestions mean code is being sent to a service. For sensitive code or credentials, disable AI in those files or use a workspace filter. If you can, keep AI-enabled work to non-critical components and use local tooling for sensitive modules.
  • Cost and sustainability: AI usage incurs cost (subscription fees or token-based pricing). I reserve AI for exploration and scaffolding rather than core revenue-critical logic. It’s also worth tracking AI-assisted edits to minimize wasteful churn.
  • Reliability vs. creativity: AI helps with boilerplate, but it’s not a substitute for domain knowledge. I rely on LSP-driven type checks, tests, and code review to ensure quality. AI is a helper, not a replacement for engineering discipline.
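
If inline suggestions make typing feel sluggish, copilot.lua exposes a couple of levers. A sketch assuming its documented suggestion options:
-- Trade immediacy for responsiveness: raise the debounce, or go fully manual
require("copilot").setup({
  suggestion = {
    enabled = true,
    auto_trigger = false,  -- only suggest when explicitly requested
    debounce = 150,        -- ms to wait before querying (the plugin default is lower)
  },
})
-- With auto_trigger off, request a suggestion on demand:
vim.keymap.set('i', '<C-Space>', function()
  require("copilot.suggestion").next()
end, { desc = "Trigger a Copilot suggestion" })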

Measuring value: a simple, practical approach

  • Time-to-first-working-feature: compare a feature built with and without AI assistance. I’ve seen a 15–40% reduction in initial draft time for boilerplate code and comments in real projects.
  • Defect rate in the early stages: track defects found during code reviews and first-pass tests. If AI-generated code consistently introduces subtle edge-case bugs, you’ll know to tighten guardrails.
  • Developer happiness: a quick check-in on whether the AI helps reduce mental load. If it increases cognitive overhead, dial back the AI intensity.

Security and governance: guardrails that matter

  • Keep secrets out of AI prompts. Use environment variables or secret-management patterns.
  • Disable or sandbox AI interactions for critical modules. Create a policy for when AI can be used (e.g., only for new modules or non-sensitive sections).
  • Use code reviews as the ultimate gatekeeper. AI should accelerate reviews, not replace them.

A practical day-in-the-life routine for a bootstrapped solo dev

  • Start with a project-wide health check: open terminal, run tests, verify linting and formatting pipelines.
  • Open the feature branch in Neovim. Use LSP for fast navigation and type checks. The AI helper drafts docstrings and small scaffolds as you go.
  • Implement the core logic with your own reasoning, stepping through edge cases. Let AI fill in routine comments and unit-test scaffolds.
  • Run the test suite. If tests fail, analyze the failure locally; AI can help draft patches, but you should reason through the fix.
  • Review your changes in the AI-assisted view. Use Copilot’s suggestions to clean up comments, improve function signatures, or rephrase error messages.
  • Commit with a precise message and a brief rationale for the AI-assisted changes.

Concrete tips and tricks that delivered real value

  • Keep a tight loop on your editor latency. If your AI panel is slow, treat it as optional and keep the core editing experience snappy.
  • Use language-specific LSP servers to maximize correctness. AI shines when the codebase is strongly typed and follows consistent OOP or functional idioms.
  • Create small, testable AI tasks. Instead of asking AI to “rewrite the module,” ask for “a 3-line docstring, plus a 5-line unit test scaffold.”
  • Maintain a personal “patterns” library. When you learn a good AI-assisted pattern (e.g., how to auto-generate a REST client, or how to craft robust tests), store it for future reuse.

A sample before/after AI-assisted snippet

Before (manual, boilerplate-focused):

def fetch_user(user_id):
    # TODO: implement
    return None

After (AI-assisted, with guardrails):

def fetch_user(user_id: int) -> dict:
    """
    Retrieve a user by ID.

    Args:
        user_id (int): The unique identifier for the user.

    Returns:
        dict: User object with keys 'id', 'name', 'email', or raises ValueError if not found.
    """
    if user_id <= 0:
        raise ValueError("user_id must be positive")

    user = _db.find_one({"id": user_id})
    if user is None:
        raise ValueError(f"User with id {user_id} not found")

    return {"id": user["id"], "name": user["name"], "email": user["email"]}

In this example, the AI drafted a docstring and added basic input validation, while I reinforced type hints, error handling, and a clear return shape.

Conclusion and takeaways

  • A well-tuned AI-assisted editing workflow in Neovim can save meaningful time on boilerplate and scaffolding, without surrendering control to the machine.
  • The key is to combine reliable tooling (LSP, formatters, tests) with AI as a drafting partner, not a replacement for judgment.
  • Keep privacy and guardrails in mind: selectively enable AI, especially on sensitive code, and rely on unit tests and code reviews to ensure quality.
  • Start small: adopt AI for docstrings, skeletons, and test scaffolds first, then scale as you gain confidence.
  • Track impact: measure time-to-ship and defect rate, and adjust AI usage accordingly.

If you want to see more concrete configs, I’ve kept this approach lean so you can adapt it to your stack (TypeScript, Python, Go, etc.). I’ll likely publish an updated guide with language-specific tweaks and real-world benchmark notes as I iterate.

Actionable takeaways

  • Install Copilot and a minimal LSP setup, then use AI for docstrings and scaffolding first.
  • Keep AI-disabled on sensitive files; use local tests to validate the AI-generated code.
  • Treat AI as a teammate: review, refine, and only then commit. The editor should accelerate work, not replace fundamental engineering rigor.

If you found this helpful, I’d love to hear how you integrate AI into your Neovim workflow. Reach out on X (@fullybootstrap) and share your own pragmatic patterns. And if you want deeper dives into specific language stacks or a longer “how I actually do X” series, drop a note — I’m happy to expand.