I recently gave a talk at an internal Elastic conference about prompt engineering techniques in 2025, accompanied by live demonstrations using Mistral via Ollama, VS Code Copilot, and Windsurf. Here’s a brief overview of the key topics covered.
Why Prompt Engineering Matters #
Prompt engineering dramatically improves output quality, enables structured responses, and allows the use of smaller, more cost-effective models. The economic impact is substantial:
| Model | Input (per 1M tokens) | Output (per 1M tokens) |
|---|---|---|
| GPT-4.1 | $2.00 | $8.00 |
| Mistral Small 3 | $0.10 | $0.30 |
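At these rates, a workload of 10M input tokens and 2M output tokens would cost about $36 on GPT-4.1 ($20 + $16) versus $1.60 on Mistral Small 3 ($1.00 + $0.60), roughly a 20x difference for tasks a well-prompted smaller model can handle.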
Core Concepts Covered #
My presentation explored several key areas:
- Local models and tools: Running models locally with tools like Ollama for privacy, cost savings, and compliance
- Fundamental techniques: Zero-shot vs. few-shot prompting, system prompts, and chain-of-thought reasoning (a minimal example follows this list)
- Advanced workflows: Prompt chaining, visualizations with Mermaid, and coding assistant integration
- Code generation patterns: From idea refinement to implementation planning and execution; the complete codegen workflow demonstration (as described by Harper Reed in his blog post, linked under Resources below) was perhaps the most impressive part of the presentation
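For the fundamental techniques, a system prompt plus a couple of few-shot examples is often enough to pin down both behavior and output format. Here is a minimal sketch against a local Mistral model served by Ollama (its default `/api/chat` endpoint); the triage task and the example pairs are illustrative, not taken from the talk.

```python
# Minimal sketch: system prompt + few-shot examples against a local Mistral
# model served by Ollama (default endpoint http://localhost:11434/api/chat).
# The classification task and example pairs are illustrative only.
import requests

OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"

messages = [
    # System prompt: sets the role and constrains the output up front.
    {"role": "system", "content": "You are a support triage assistant. "
                                  "Reply with exactly one word: bug, feature, or question."},
    # Few-shot examples: user/assistant pairs showing the expected format.
    {"role": "user", "content": "The export button crashes the app on Safari."},
    {"role": "assistant", "content": "bug"},
    {"role": "user", "content": "Could you add dark mode to the dashboard?"},
    {"role": "assistant", "content": "feature"},
    # The actual input we want classified.
    {"role": "user", "content": "How do I reset my API key?"},
]

resp = requests.post(
    OLLAMA_CHAT_URL,
    json={"model": "mistral", "messages": messages, "stream": False,
          "options": {"temperature": 0.0}},  # low temperature for stable labels
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["message"]["content"])  # expected: "question"
```

The same pattern scales down well: with the format locked in by examples, a small local model can handle this kind of task reliably.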
Live Demonstrations #
The talk featured hands-on demonstrations of:
- Running Mistral locally through Ollama
- Using VS Code Copilot for code understanding and refactoring
- Exploring agentic workflows with Windsurf
These demonstrations showed how the techniques work in practice, from simple interactions to complex development tasks; a rough sketch of the chained codegen-style flow follows below.
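Prompt chaining can be approximated in a few lines: each step's answer feeds the next prompt. The sketch below assumes Ollama is serving Mistral locally; the step prompts and the `ask()` helper are my own illustrations, not the exact prompts from the talk or from Harper Reed's post.

```python
# Rough sketch of prompt chaining in the spirit of the codegen workflow:
# each step's output becomes the next step's input. Prompts and the ask()
# helper are illustrative assumptions; assumes Ollama serves Mistral locally.
import requests

OLLAMA_CHAT_URL = "http://localhost:11434/api/chat"

def ask(system: str, user: str) -> str:
    """Send one system+user turn to the local model and return its reply."""
    resp = requests.post(
        OLLAMA_CHAT_URL,
        json={
            "model": "mistral",
            "messages": [
                {"role": "system", "content": system},
                {"role": "user", "content": user},
            ],
            "stream": False,
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

idea = "A CLI tool that summarizes a git repo's recent commits for a weekly report."

# Step 1: idea refinement - turn a vague idea into a concrete spec.
spec = ask(
    "You are a product-minded engineer. Produce a short, unambiguous spec "
    "with inputs, outputs, and constraints.",
    f"Refine this idea into a spec:\n{idea}",
)

# Step 2: implementation planning - turn the spec into ordered, small steps.
plan = ask(
    "You are a senior developer. Produce a numbered implementation plan of "
    "small, independently testable steps.",
    f"Write an implementation plan for this spec:\n{spec}",
)

print(plan)  # the plan would then drive step-by-step code generation
```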
The Experimentation Mindset #
The most successful prompt engineers approach LLMs with:
- Clear goals
- Structured inputs (a small sketch follows this list)
- Iterative refinement
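As a small illustration of structured inputs (and the structured responses mentioned earlier), the sketch below asks a local Mistral model, via Ollama, to answer in JSON so the result can be parsed and refined iteratively. The schema and prompt are illustrative; the `format: "json"` field is supported by recent Ollama versions, and if it is not available, the instruction in the system prompt alone usually gets close.

```python
# Sketch of structured inputs/outputs: constrain a local model to return JSON
# so the result can be parsed and iterated on programmatically.
# The schema and prompt are illustrative only.
import json
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "mistral",
        "format": "json",  # ask Ollama to constrain the reply to valid JSON
        "stream": False,
        "messages": [
            {"role": "system", "content": "Return only JSON with keys "
                                          "'summary' (string) and 'risks' (array of strings)."},
            {"role": "user", "content": "Assess migrating our logging pipeline "
                                        "to OpenTelemetry."},
        ],
    },
    timeout=120,
)
resp.raise_for_status()
result = json.loads(resp.json()["message"]["content"])
print(result["summary"])
print(result["risks"])
```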
Next Steps #
The full presentation slides include detailed examples, demonstrations, and practical workflows not covered in this brief summary. If you’re looking to improve your LLM interaction skills, I encourage you to check them out. (They’re slightly sanitized to remove any sensitive information.)
Resources #
Prompt Engineering
- Everything I’ll forget about prompting LLMs
- Anthropic: Prompt engineering guide
General Resources
- Simon Willison’s Weblog: you can start here
- Pragmatic engineer podcast on building Windsurf
Codegen Workflow
- Using LLMs and Cursor to become a finisher
- My LLM codegen workflow
What prompt engineering techniques have you found most effective?