Effective Prompt Engineering in 2025

2025/05/16

Tags: llm prompt-engineering ai software-engineering talk windsurf

I recently gave a talk at an internal Elastic conference about prompt engineering techniques in 2025, accompanied by live demonstrations using Mistral via Ollama, VS Code Copilot, and Windsurf. Here’s a brief overview of the key topics covered.

Why Prompt Engineering Matters #

Prompt engineering dramatically improves output quality, enables structured responses, and allows the use of smaller, more cost-effective models. The economic impact is substantial:

| Model | Input (per 1M tokens) | Output (per 1M tokens) |
| --- | --- | --- |
| GPT-4.1 | $2.00 | $8.00 |
| Mistral Small 3 | $0.10 | $0.30 |
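To make the "structured responses" point concrete, here is a minimal sketch of one common technique: embedding an explicit output schema in the prompt so a small model like Mistral Small returns parseable JSON instead of free-form prose. The function name and the example schema are illustrative, not from the talk.

```python
import json

def build_structured_prompt(task: str, schema: dict) -> str:
    """Wrap a task with explicit output-format instructions so the
    model is steered toward returning JSON matching the given schema."""
    return (
        "You are a precise assistant. Respond ONLY with JSON matching "
        f"this schema, with no extra text:\n{json.dumps(schema, indent=2)}\n\n"
        f"Task: {task}"
    )

# Hypothetical sentiment-classification schema for demonstration.
schema = {"sentiment": "positive | negative | neutral", "confidence": "0.0-1.0"}
prompt = build_structured_prompt("Classify: 'The release fixed every bug.'", schema)
print(prompt)
```

The same prompt string could then be sent to any backend (e.g. Ollama's API); keeping the schema instruction separate from the task makes it easy to reuse across models.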

Core Concepts Covered #

My presentation explored several key areas:

Live Demonstrations #

The talk featured hands-on demonstrations of:

These demonstrations showed how these techniques work in practice for both simple interactions and complex development tasks.
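As one example of the kind of technique demonstrated, few-shot prompting gives the model a handful of worked examples before the real input, which reliably steers smaller models toward the desired format. The reviews and labels below are illustrative placeholders, not material from the talk.

```python
# Few-shot prompt construction: show labeled examples, then leave the
# final "Sentiment:" slot open for the model to complete.
examples = [
    ("Great docs, easy setup.", "positive"),
    ("Crashes on startup.", "negative"),
]
query = "The UI is fine but slow."

lines = ["Classify the sentiment of each review as positive or negative."]
for text, label in examples:
    lines.append(f"Review: {text}\nSentiment: {label}")
lines.append(f"Review: {query}\nSentiment:")
prompt = "\n\n".join(lines)
print(prompt)
```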

The Experimentation Mindset #

The most successful prompt engineers approach LLMs with:

Next Steps #

The full presentation slides include detailed examples, demonstrations, and practical workflows not covered in this brief summary. If you’re looking to improve your LLM interaction skills, I encourage you to check them out. (They’re slightly sanitized to remove any sensitive information.)

Resources #

Prompt Engineering

- Everything I’ll forget about prompting LLMs
- Anthropic: Prompt engineering guide

General Resources

- Simon Willison’s Weblog: you can start here
- Pragmatic engineer podcast on building Windsurf

Codegen Workflow

- Using LLMs and Cursor to become a finisher
- My LLM codegen workflow

What prompt engineering techniques have you found most effective?
